Hi there! I’m Gokul, a PhD student in the Robotics Institute at Carnegie Mellon University working on interactive learning under causal confounding.
I work with Drew Bagnell, Steven Wu, and Sanjiban Choudhury. I completed an M.S. at UC Berkeley with Anca Dragan on Learning with Humans in the Loop.
I’ve spent summers working as a Data Engineering Intern @ SpaceX, Autonomous Vehicles Perception Intern @ NVIDIA, Motion Planning ML Intern @ Aurora, and Graduate Research Intern @ MSR.
In my free time, I fold origami, attend hackathons, and lift. I’m a huge fan of birds (especially lovebirds), books (especially those by Murakami), and indie or alt. music (especially that of Radiohead).
If you’d be interested in working with me, feel free to shoot me an email!
Events & News
September 2022 - Two papers accepted to NeurIPS ‘22! One on minimax optimal online imitation learning and another on imitating an expert who has access to privileged information! I was also named a top reviewer for the conference.
September 2022 - Presenting on our graduate student mentoring workshop at the Eberly Center Teaching and Learning Summit!
May 2022 - Our paper on causal imitation learning was accepted for oral presentation at ICML ‘22!
March 2022 - I’ll be spending the summer at MSR working with Geoff Gordon!
Imitation w/ Unobserved Contexts
We describe algorithms for, and conditions under which, it is possible to imitate an expert who has access to privileged information. Our work was published at NeurIPS 2022. [Website][Blog]
Minimax Optimal Online IL
We derive the minimax optimal algorithm for imitation learning in the finite-demonstration regime (i.e., an algorithm that outperforms both online and offline IL in the worst case). Our work was published at NeurIPS 2022. [Website][Paper]
Causal Imitation Learning under TCN
We use instrumental variable regression to derive imitation learning algorithms that are robust to temporally correlated noise, both in theory and in practice. Oral at ICML 2022. [Website][Paper]
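For readers unfamiliar with instrumental variable regression, here is a minimal, self-contained sketch of the classic two-stage least squares (2SLS) idea on synthetic data. This is a generic illustration of IV regression, not the algorithm from the paper; all variable names and the toy data-generating process are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical toy setup: an unobserved confounder u affects both the
# regressor x and the outcome y; the instrument z affects x but has no
# direct effect on y.
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # instrument
x = 2.0 * z + u + rng.normal(size=n)         # regressor, confounded by u
y = 3.0 * x + 5.0 * u + rng.normal(size=n)   # outcome; true causal effect is 3

# Naive OLS is biased because x is correlated with u.
ols = np.linalg.lstsq(np.c_[x, np.ones(n)], y, rcond=None)[0][0]

# Two-stage least squares:
# Stage 1: project x onto the instrument z.
Z = np.c_[z, np.ones(n)]
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
# Stage 2: regress y on the projected regressor x_hat.
iv = np.linalg.lstsq(np.c_[x_hat, np.ones(n)], y, rcond=None)[0][0]

print(f"OLS estimate: {ols:.2f} (biased)")
print(f"IV estimate:  {iv:.2f} (close to the true effect of 3)")
```

Because the instrument is correlated with the regressor but independent of the confounder, the second-stage estimate recovers the causal coefficient that naive regression misses.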
Of Moments and Matching
We construct a taxonomy of imitation learning algorithms, derive bounds for each class, construct novel reduction-based algorithmic templates that achieve these bounds, and implement simple, elegant realizations with competitive empirical performance. Published at ICML 2021. [Website][Blog]