
Joel Jang

Senior Research Scientist @ NVIDIA GEAR Lab

[email protected]

About
Hi, I am a Senior Research Scientist at NVIDIA GEAR Lab, leading the world model team for Project GR00T. Our main research agenda is to find a scaling axis for robot learning: using world models so that policy performance keeps improving as we scale the total amount of GPU compute. Do reach out if you are interested in joining our team!
News
May 2025     We release DreamGen, the first method that enables visuomotor robot policies to perform new verbs in new environments. [blog][VentureBeat][YouTube (short)][YouTube (talk)]
May 2025     Glad to share that The BiGGen Bench won the Best Paper Award at NAACL 2025!
March 2025     We release GR00T N1, the first foundation model for humanoid robots. I led the Video Generation Models and Latent Actions team.
Jan 2025     LAPA, HAMSTER, and Knowledge Entropy have been accepted to ICLR 2025!
Education

University of Washington Sep. 2023 - May 2025 (on leave)

Ph.D. Student (advisors: Luke Zettlemoyer and Dieter Fox)

Korea Advanced Institute of Science and Technology (KAIST) Mar. 2021 - Aug. 2023

M.S. in Artificial Intelligence (advisor: Minjoon Seo)

Korea University Mar. 2017 - Feb. 2021

B.S. in Computer Science

Publications

2026

GRAPE: Generalizing Robot Policy via Preference Alignment ICRA 2026

Zijian Zhang*, Kaiyuan Zheng*, Zhaorun Chen*, Joel Jang, Yi Li, Chaoqi Wang, Mingyu Ding, Dieter Fox, Huaxiu Yao

2025

Cosmos-Predict2.5: World Simulation with Video Foundation Models for Physical AI Technical Whitepaper

NVIDIA

ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding NeurIPS 2025

Yiyang Zhou, Yangfan He, Yaofeng Su, Siwei Han, Joel Jang, Gedas Bertasius, Mohit Bansal, Huaxiu Yao

FLARE: Robot Learning with Implicit World Modeling CoRL 2025

Ruijie Zheng*, Jing Wang*, Scott Reed*, Johan Bjorck, Yu Fang, Fengyuan Hu, Joel Jang, Kaushil Kundalia, Zongyu Lin, Loic Magne, Avnish Narayan, You Liang Tan, Guanzhi Wang, Qi Wang, Jiannan Xiang, Yinzhen Xu, Seonghyeon Ye, Jan Kautz, Furong Huang, Yuke Zhu†, Linxi Fan†

DreamGen: Unlocking Generalization in Robot Learning through Video World Models CoRL 2025

Joel Jang*, Seonghyeon Ye*, Zongyu Lin*, Jiannan Xiang*, Johan Bjorck, Ruijie Zheng, Yu Fang, Fengyuan Hu, Spencer Huang, Kaushil Kundalia, Yen-Chen Lin, Loic Magne, Ajay Mandlekar, Avnish Narayan, You Liang Tan, Guanzhi Wang, Jing Wang, Qi Wang, Yinzhen Xu, Xiaohui Zeng, Kaiyuan Zheng, Ming-Yu Liu, Luke Zettlemoyer, Dieter Fox, Jan Kautz, Scott Reed†, Yuke Zhu†, Linxi Fan†

GR00T N1: An Open Foundation Model for Generalist Humanoid Robots Technical Whitepaper

NVIDIA GEAR Team

Magma: A Foundation Model for Multimodal AI Agents CVPR 2025

Jianwei Yang, Reuben Tan, Qianhui Wu, Ruijie Zheng, Baolin Peng, Yongyuan Liang, Yu Gu, Mu Cai, Seonghyeon Ye, Joel Jang, Yuquan Deng, Lars Liden, Jianfeng Gao

Latent Action Pretraining from Videos ICLR 2025

Seonghyeon Ye*, Joel Jang*, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo

Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition ICLR 2025

Jiyeon Kim*, Hyunji Lee*, Hyowon Cho, Joel Jang, Hyeonbin Hwang, Seungpil Won, Youbin Ahn, Dohaeng Lee, Minjoon Seo

HAMSTER: Hierarchical Action Models for Open-World Robot Manipulation ICLR 2025

Yi Li*, Yuquan Deng*, Jesse Zhang*, Joel Jang, Marius Memmel, Caelan Reed Garrett, Fabio Ramos, Dieter Fox, Anqi Li, Abhishek Gupta, Ankit Goyal

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models NAACL 2025 (best paper)

Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Cho, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee, Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee, Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo

2024

Semiparametric Token-Sequence Co-Supervision ACL 2024

Hyunji Lee*, Doyoung Kim*, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon Oh, Minjoon Seo

LangBridge: Multilingual Reasoning Without Multilingual Supervision ACL 2024

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

Exploring the Practicality of Generative Retrieval on Dynamic Corpora EMNLP 2024

Chaeeun Kim*, Soyoung Yoon*, Hyunji Lee, Joel Jang, Sohee Yang, Minjoon Seo

How Well Do Large Language Models Truly Ground? NAACL 2024

Hyunji Lee*, Sejune Joo*, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon Oh, Minjoon Seo

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging NeurIPS 2024 AFM Workshop

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models ICLR 2024

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis TACL 2024

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning EMNLP 2023

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

Retrieval of Soft Prompt Enhances Zero-shot Task Generalization EMNLP 2023 Findings

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

Knowledge Unlearning for Mitigating Privacy Risks in Language Models ACL 2023

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

Gradient Ascent Post-training Enhances Language Model Generalization ACL 2023 (short)

Dongkeun Yoon*, Joel Jang*, Sungdong Kim, Minjoon Seo

Prompt Injection: Parameterization of Fixed Inputs ACL 2023 Findings

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

Exploring the Benefits of Training Expert Language Models over Instruction Tuning ICML 2023

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Guess the Instruction! Making Language Models Stronger Zero-Shot Learners ICLR 2023

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts NeurIPS 2022 Workshop (TL4NLP)

Joel Jang*, Seonghyeon Ye*, Minjoon Seo

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models EMNLP 2022

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

Towards Continual Knowledge Learning of Language Models ICLR 2022

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

2021

Sequential Targeting: A Continual Learning Approach for Data Imbalance in Text Classification Expert Systems with Applications (2021)

Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

(* indicates equal contribution, † indicates equal advising)

Vitæ

Full CV in PDF.

NVIDIA GEAR Lab Mar. 2025 - Present
Senior Research Scientist
Leading the world modeling team
NVIDIA GEAR Lab Oct. 2024 - Feb. 2025 (5 months)
Research Intern (Mentors: Scott Reed, Yuke Zhu, and Jim Fan)
Scalable methods for creating Foundation Models for Robotics
NVIDIA Robotics Lab Mar. 2024 - Sep. 2024 (6 months)
Research Intern (Mentors: Ajay Mandlekar and Dieter Fox)
Scalable methods for creating Foundation Models for Robotics
Allen Institute for AI (AI2) Jun. 2023 - Jan. 2024 (8 months)
Research Intern (Mentors: Prithviraj Ammanabrolu, Bill Yuchen Lin, Yejin Choi)
Personalized RLHF
University of Washington Sep. 2023 - May 2025 (1 year 8 months)
Ph.D. Student (Advisors: Luke Zettlemoyer and Dieter Fox)
Foundation Models for Generalist AI Robots
LG AI Research Jul. 2022 - May 2023 (11 months)
Research Intern (Mentors: Moontae Lee, Lajanugen Logeswaran)
Worked on developing LMs that can generalize to novel tasks
KAIST Language & Knowledge Lab Mar. 2021 - Aug. 2023 (2 years 5 months)
M.S. Student (Advisor: Minjoon Seo)
Continual Adaptation of LLMs
Kakao Brain Dec. 2020 - Feb. 2021 (3 months)
Research Intern (Mentor: Ildoo Kim)
Worked on large-scale representation learning
NAVER Jul. 2020 - Sep. 2020 (3 months)
Software Engineer Intern
Worked on continual learning for hate speech detection
KIST Europe Aug. 2019 - Jan. 2020 (6 months)
Research Intern (Mentor: Sungho Suh)
Worked on machine prognostics using machine learning
Korea University Mar. 2017 - Feb. 2021 (4 years)
B.S. in Computer Science