Amrith Setlur

I’m a final-year PhD student in the Machine Learning Department at Carnegie Mellon University, where I am fortunate to be advised by Virginia Smith. I am also a long-term visiting researcher at UC Berkeley, advised by Sergey Levine, and collaborate closely with Aviral Kumar. My PhD is generously supported by the JP Morgan AI PhD Fellowship.

Research Overview

I work on fundamental principles and practical recipes for building models with test-time adaptation (TTA) capabilities. These models spend additional computation at test time on difficult instances through reasoning, search, and interaction, rather than behaving as static predictors that fail under distribution shift or limited supervision. More recently, I have focused on understanding the bottlenecks in training reasoning LLMs, improving exploration on hard problems where verification signals are weak, and, more broadly, developing RL training methods for LLMs that can continually interact, adapt, and update themselves when deployed in underspecified test environments.