I am a Postdoctoral Fellow at the School of Engineering and Applied Sciences (SEAS) at Harvard University. My hosts are Boaz Barak and Sham Kakade.
I study the algorithms and architectures that make large language models work. My research focuses on architecture design, optimization, and long-context capabilities, and I have also worked on post-training and reinforcement learning.
Prior to coming to Harvard, I did my PhD at Princeton University, advised by Boris Hanin. During that time, I interned at Facebook AI Research, Google DeepMind, and Google Research. Before that, I did my undergraduate studies at the École Normale Supérieure de Lyon in France.
Matching Features, Not Tokens: Energy-Based Fine-Tuning of Language Models
Samy Jelassi*, Mujin Kwun*, Rosie Zhao*, Yuanzhi Li, Nicolo Fusi, Yilun Du, Sham Kakade, Carles Domingo-Enrich* (*equal contribution). Submitted, 2026.
Let's (not) just put things in Context: Test-time Training for Long-context LLMs
Rachit Bansal, Aston Zhang, Rishabh Tiwari, Lovish Madaan, Sai Surya Duvvuri, Fnu Devvrit, David Brandfonbrener, David Alvarez-Melis, Prajjwal Bhargava, Mihir Kale, Samy Jelassi International Conference on Learning Representations (ICLR), 2026.