I'm an undergraduate intern at RLLAB, advised by Prof. Youngwoon Lee. I'm currently at UC Berkeley as an exchange student for Spring 2026. Happy to connect if you're around! I'm interested in efficient and generalizable robot learning via reinforcement learning.
My research focuses on making robot learning more efficient and generalizable through reinforcement learning. Since most vision-language-action (VLA) models rely heavily on imitation learning, I aim to explore how reinforcement learning can be incorporated to help these models handle out-of-distribution (OOD) situations more robustly and acquire more generalizable skills.
AMPED is a framework for skill-based reinforcement learning that simultaneously maximizes state coverage and skill diversity through several carefully designed components.
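This is not AMPED's actual objective, only a minimal sketch of the general recipe such methods build on: an intrinsic reward that adds a state-coverage (exploration) bonus to a skill-diversity bonus. Here the coverage term is illustrated with a particle-based k-nearest-neighbor entropy estimate and the diversity term with a DIAYN-style skill discriminator; both choices and the weighting coefficient `beta` are illustrative assumptions, not AMPED's components.

```python
import torch
import torch.nn.functional as F


def coverage_bonus(states: torch.Tensor, k: int = 12) -> torch.Tensor:
    """Particle-based entropy estimate: reward each state by the log distance
    to its k-th nearest neighbor within the batch (APT-style coverage bonus)."""
    dists = torch.cdist(states, states)                         # (B, B) pairwise distances
    knn_dist = dists.topk(k + 1, largest=False).values[:, -1]   # k-th neighbor (index 0 is self)
    return torch.log(knn_dist + 1.0)


def diversity_bonus(discriminator: torch.nn.Module,
                    states: torch.Tensor,
                    skills: torch.Tensor) -> torch.Tensor:
    """DIAYN-style term: reward states from which the active skill is easy to
    infer, i.e. log q(z | s) (the constant log p(z) baseline is omitted)."""
    log_q = F.log_softmax(discriminator(states), dim=-1)        # (B, num_skills)
    return log_q.gather(-1, skills.unsqueeze(-1)).squeeze(-1)


def intrinsic_reward(discriminator: torch.nn.Module,
                     states: torch.Tensor,
                     skills: torch.Tensor,
                     beta: float = 0.5) -> torch.Tensor:
    """Combine the state-coverage and skill-diversity bonuses into one reward."""
    return coverage_bonus(states) + beta * diversity_bonus(discriminator, states, skills)
```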
This research investigates Simplicial Normalization (SimNorm) as an activation function for multi-task reinforcement learning, showing that it underperforms ReLU on Meta-World benchmarks.
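For reference, a minimal PyTorch sketch of a common SimNorm formulation: the feature vector is split into fixed-size groups and a softmax is applied within each group, so every group lies on a probability simplex. The group size of 8 and the trunk architecture below are illustrative assumptions, not necessarily the settings used in this study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimNorm(nn.Module):
    """Simplicial normalization: split features into groups of `group_size`
    and apply a softmax within each group."""

    def __init__(self, group_size: int = 8):
        super().__init__()
        self.group_size = group_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        *batch, dim = x.shape
        assert dim % self.group_size == 0, "feature dim must be divisible by group size"
        x = x.view(*batch, dim // self.group_size, self.group_size)
        x = F.softmax(x, dim=-1)          # each group becomes a point on a simplex
        return x.view(*batch, dim)


def make_trunk(in_dim: int, hidden_dim: int, use_simnorm: bool) -> nn.Sequential:
    """Same MLP trunk with either SimNorm or ReLU, for a drop-in comparison."""
    act = SimNorm(group_size=8) if use_simnorm else nn.ReLU()
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), act,
                         nn.Linear(hidden_dim, hidden_dim), act)
```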