I am a Ph.D. candidate advised by Professor Jaegul Choo at the KAIST Graduate School of AI. Previously, I was a Ph.D. research intern at Meta Reality Labs and NAVER LABS.
My current research focuses on constructing 3D generative models for physical AI, specifically 3D world models built on video diffusion and 3D neural representations. I am also interested in scaling these systems into Vision-Language-Action (VLA) models to enable foundation-model-driven autonomy in complex driving environments.
shwang.14 [at] kaist.ac.kr
8 Seongnam-daero 331 beon-gil, KINS Tower, Suite 904, Bundang-gu, Seongnam-si, Gyeonggi 13558, South Korea
B.S. | KAIST Dept. of Mechanical Engineering
2014.08 - 2022.02
HumanAnything: Spatially-Aligned Multi-Modal Video Diffusion for Human-Centric Generation
SphereDiff: Tuning-free 360° Static and Dynamic Panorama Generation via Spherical Latent Representation
Paper | Project Page | Code
SurFhead: Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel-based Head Avatars
Paper | Project Page | Code
Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting
VEGS: View Extrapolation of Urban Scenes in 3D Gaussian Splatting using Learned Priors
Paper | Project Page | Code
FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields
Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model
Paper | Project Page | Code