In my free time, I do origami, hackathons, and run/lift. I’m a huge fan of birds (especially lovebirds), books (especially those by Murakami), and indie or alt. music (especially that of Radiohead).
If you’d be interested in working with me, feel free to shoot me an email!
Events & News
September 2023 - One paper accepted to NeurIPS 2023!
August 2023 - I was featured on the TWIML AI Podcast! You can watch my interview here.
May 2023 - I’ll be spending the summer at Google, working with Alekh Agarwal, Chris Dann, and Rahul Kidambi on better algorithms for RLHF (reinforcement learning from human feedback).
May 2023 - I passed my thesis proposal! Check out my talk if you’re curious about how the work I’ve been doing over the last few years fits together.
April 2023 - Our paper on exponentially faster algorithms for inverse reinforcement learning was accepted to ICML ’23. The computational efficiency our algorithm provides complements the statistical efficiency our work at NeurIPS helped develop.
We derive exponentially faster algorithms for inverse RL by resetting the learner to states from the expert demonstrations within the RL subroutine. Our work was published at ICML '23. [Website][Paper]
We construct a taxonomy for imitation learning algorithms, derive bounds for each class, construct novel reduction-based algorithmic templates that achieve these bounds, and implement simple, elegant realizations with competitive empirical performance. Published at ICML 2021. [Website][Blog]