I am a third-year Ph.D. student at the KAIST Graduate School of AI, advised by Minjoon Seo and Kimin Lee. I am also a research intern on the NVIDIA GEAR team, led by Jim Fan and Yuke Zhu. I am interested in building robotic foundation models.
Publications
2025
DreamGen: Unlocking Generalization in Robot Learning through Neural Trajectories
Joel Jang*, Seonghyeon Ye*, Zongyu Lin*, Jiannan Xiang*, Johan Bjorck, Yu Fang, Fengyuan Hu, Spencer Huang, Kaushil Kundalia, Yen-Chen Lin, Loic Magne, Ajay Mandlekar, Avnish Narayan, You Liang Tan, Guanzhi Wang, Jing Wang, Qi Wang, Yinzhen Xu, Xiaohui Zeng, Kaiyuan Zheng, Ruijie Zheng, Ming-Yu Liu, Luke Zettlemoyer, Dieter Fox, Jan Kautz, Scott Reed*, Yuke Zhu*, Linxi "Jim" Fan* [paper][website]
GR00T N1: An Open Foundation Model for Generalist Humanoid Robots
NVIDIA, Johan Bjorck, Fernando Castañeda, Nikita Cherniadev, Xingye Da, Runyu Ding, Linxi "Jim" Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer Huang, Joel Jang, Zhenyu Jiang, Jan Kautz, Kaushil Kundalia, Lawrence Lao, Zhiqi Li, Zongyu Lin, Kevin Lin, Guilin Liu, Edith Llontop, Loic Magne, Ajay Mandlekar, Avnish Narayan, Soroush Nasiriany, Scott Reed, You Liang Tan, Guanzhi Wang, Zu Wang, Jing Wang, Qi Wang, Jiannan Xiang, Yuqi Xie, Yinzhen Xu, Zhenjia Xu, Seonghyeon Ye, Zhiding Yu, Ao Zhang, Hao Zhang, Yizhou Zhao, Ruijie Zheng, Yuke Zhu [paper][code][website]
Magma: A Foundation Model for Multimodal AI Agents
Jianwei Yang, Reuben Tan, Qianhui Wu, Ruijie Zheng, Baolin Peng, Yongyuan Liang, Yu Gu, Mu Cai, Seonghyeon Ye, Joel Jang, Yuquan Deng, Lars Liden, Jianfeng Gao CVPR 2025 [paper][code][website]
Latent Action Pretraining from Videos
Seonghyeon Ye*, Joel Jang*, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee*, Jianfeng Gao*, Luke Zettlemoyer*, Dieter Fox*, Minjoon Seo* ICLR 2025; LangRob Workshop @ CoRL 2024 (Best Paper) [paper][code][website]
2024
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
Hoyeon Chang, Jinho Park, Seonghyeon Ye, Sohee Yang, Youngkyung Seo, Du-Seong Chang, Minjoon Seo NeurIPS 2024 [paper][code]
Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae EMNLP 2024 [paper][code]
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo EMNLP 2024 Findings [paper][code]
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Seonghyeon Ye*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo ICLR 2024 (Spotlight) [paper][code]
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo TACL 2024 [paper][code]
Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following
Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo AAAI 2024 [paper][code]
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sung Ju Hwang, Se-young Yun NAACL 2024 [paper]
2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo EMNLP 2023 [paper][code]
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo EMNLP 2023 Findings [paper][code]
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo ICML 2023 [paper][code]
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo ICLR 2023 [paper][code]
SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
Seonghyeon Ye*, Yongrae Jo*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Minjoon Seo Blog post [blog][code]
2022
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Joel Jang*, Seonghyeon Ye*, Minjoon Seo Transfer Learning for NLP Workshop @ NeurIPS 2022 [paper][code]
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo EMNLP 2022 [paper][code]
Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo ICLR 2022 [paper][code]
2021
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning
Seonghyeon Ye, Jiseon Kim, Alice Oh EMNLP 2021 (short) [paper][code]
Dimensional Emotion Detection from Categorical Emotion
Sungjoon Park, Jiseon Kim, Seonghyeon Ye, Jaeyeol Jeon, Hee Young Park, Alice Oh EMNLP 2021 [paper][code]
Education
KAIST AI
M.S. & Ph.D. in Artificial Intelligence, 2022 - Present
Advisors: Minjoon Seo, Kimin Lee
KAIST CS
B.S. in Computer Science, 2017 - 2021
Advisors: Alice Oh, Jong C. Park
Work Experience
NVIDIA GEAR
Research Intern, Dec 2024 - Present
Working with Jim Fan and Yuke Zhu