I am the Harold R. and Mary Anne Nash Early Career Professor and Assistant Professor in the School of Electrical and Computer Engineering and the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. I received the B.Tech. degree (with honors) from the Indian Institute of Technology Madras and the Ph.D. degree in Electrical Engineering from the University of California, Berkeley. Before joining Georgia Tech, I spent a semester at the Simons Institute for the Theory of Computing as a research fellow for the program “Theory of Reinforcement Learning.”
My broad interests are in game theory and online and statistical learning. I am particularly interested in designing learning algorithms that provably adapt in strategic environments, fundamental properties of overparameterized models, and the foundations of multi-agent decision-making. In my spare time, I enjoy singing Carnatic vocal music, playing the piano, and long-distance cycling.
See here for a more formal bio in the third person.
Recent News
- April 2026: I had a great time presenting our work on kernel interpolation and approximation at the Harvard Statistics Colloquium.
- March 2026: I presented our recent work on a general technique for approximating empirical high-dimensional kernel matrices (remotely) at the Oberwolfach workshop on “Modern and Emerging Phenomena in Machine Learning”.
- February 2026: I enjoyed presenting our work on the surprising last-iterate convergence of exponential weights in 2×2 symmetric general-sum games at Information Theory and Applications (ITA) 2026 and Algorithmic Learning Theory (ALT) 2026. This work won the best student paper award at ALT 2026. A big congratulations to all my co-authors and especially lead student author Guanghui Wang!
- December 2025: Our work on the impact of general training losses in high dimensions, led by Kuo-Wei Lai, is published in the Journal of Machine Learning Research. Congratulations, Kuo-Wei!
- October 2025: Our work on faster rates for generic and adversarially robust optimization methods, achieved through an online learning/game approach and led by Guanghui Wang, is published in Mathematical Programming Series B (special issue on Optimization for Machine Learning). Congratulations, Guanghui!
- July 2025: My former postdoc Rohan Ghuge presented our work on improved and oracle-efficient online $\ell_1$ multicalibration at ICML 2025. Congratulations, Rohan!
- July 2025: I had a really nice time presenting our work on the impact of general training losses and out-of-distribution error analysis at the Youth in High Dimensions workshop in Trieste, Italy.
- July 2025: My student Milind Nakul presented work that he led on frequency-by-frequency estimation of the stationary mass of a distribution on a mixing sequence at COLT 2025. Congratulations, Milind!
- June 2025: I presented our work on optimal estimation of stationary missing mass on a mixing sequence at the International Indian Statistical Association conference in Lincoln, Nebraska.
- June 2025: A big congratulations to Rohan Ghuge, who has started his position as an Assistant Professor at UT Austin’s McCombs School of Business!
- May 2025: My students Tyler LaBonte and Kuo-Wei Lai will be presenting work that they co-led on formulating and characterizing a notion of “task shift” between classification and regression at AISTATS 2025. Congratulations, Tyler and Kuo-Wei!
- April 2025: I had a great time presenting our work on the impact of general training losses and out-of-distribution error analysis at Arizona State University, and our work on adaptive oracle-efficient online learning at Northwestern University.
- April 2025: I am honored to have received the ECE Roger Webb Outstanding Junior Faculty Award.