
Hi there! I’m Gokul, a PhD candidate in the Robotics Institute at Carnegie Mellon University working on efficient interactive learning with unobserved confounders.
I work with Drew Bagnell, Steven Wu, and Sanjiban Choudhury. I completed an M.S. at UC Berkeley with Anca Dragan on Learning with Humans in the Loop.
I’ve spent summers working as an ML Intern @ SpaceX, Autonomous Vehicles Perception Intern @ NVIDIA, Motion Planning ML Intern @ Aurora, and Research Intern @ MSR.
In my free time, I fold origami, attend hackathons, and run and lift. I’m a huge fan of birds (especially lovebirds), books (especially those by Murakami), and indie or alt. music (especially that of Radiohead).
If you’d be interested in working with me, feel free to shoot me an email!
Events & News
May 2023 - I’ll be spending the summer at Google, working with Alekh Agarwal.
May 2023 - I passed my thesis proposal! Check out my talk if you’re curious about how the work I’ve been doing over the last few years fits together.
April 2023 - Our paper on exponentially faster algorithms for inverse reinforcement learning was accepted to ICML ‘23. The computational efficiency our algorithm provides complements the statistical efficiency our work at NeurIPS helped develop.
Research Highlights
Imitation w/ Unobserved Contexts

We describe algorithms for, and conditions under which it is possible to perform, imitation of an expert who has access to privileged information. Our work was published at NeurIPS 2022. [Website][Blog]
Minimax Optimal Online IL

We derive the minimax optimal algorithm for imitation learning in the finite demonstration regime (i.e. an algorithm that is better than both online and offline IL in the worst case). Our work was published at NeurIPS 2022. [Website][Paper]
Causal Imitation Learning under TCN

We use instrumental variable regression to derive imitation learning algorithms that are robust against temporally correlated noise, both in theory and in practice. Presented as an oral at ICML 2022. [Website][Paper]
Of Moments and Matching

We construct a taxonomy of imitation learning algorithms, derive bounds for each class, develop novel reduction-based algorithmic templates that achieve these bounds, and implement simple, elegant realizations with competitive empirical performance. Published at ICML 2021. [Website][Blog]