Precognition Lab @ HKUST (Guangzhou)
Our research lab, the Precognition Lab (智能感知与预测实验室), is interested in building human-level Embodied AI systems that can effectively perceive, reason about, and interact with the real world for the good of humans. Here is an up-to-date research roadmap.
Our lab's computing resources include 36 RTX 3090/4090/L40 GPUs and a cluster of 24 A6000 GPUs with a 100 TB NAS. See this post. We also have multiple mobile platforms with robot arms and dexterous hands.
Check out our lab's cool publications and demos.
Media Coverage
News
Lab Members
Publications
* indicates corresponding authors.
  1. Stairway to Success: Zero-Shot Floor-Aware Object-Goal Navigation via LLM-Driven Coarse-to-Fine Exploration
    Zeying Gong, Rong Li, Tianshuai Hu, Ronghe Qiu, Lingdong Kong, Lingfeng Zhang, Yiyi Ding, Leying Zhang, Junwei Liang*
    IEEE Robotics and Automation Letters (RA-L) 2026
  2. Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization
    Jiaming Zhou, Ke Ye, Jiayi Liu, Teli Ma, Zifan Wang, Ronghe Qiu, Kun-Yu Lin, Zhilin Zhao, Junwei Liang*
    NeurIPS 2025
  3. GLOVER++: Unleashing the Potential of Affordance Learning from Human Behaviors for Robotic Manipulation
    Teli Ma, Jia Zheng, Zifan Wang, Ziyao Gao, Jiaming Zhou, Junwei Liang*
    CoRL 2025
  4. GLOVER: Generalizable Open-Vocabulary Affordance Reasoning for Task-Oriented Grasping
    Teli Ma, Zifan Wang, Jiaming Zhou, Mengmeng Wang, Junwei Liang*
    CoRL 2025 GenPriors Workshop 🥇 Best Paper Award
  5. Omni-Perception: Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments
    Zifan Wang, Teli Ma, Yufei Jia, Xun Yang, Jiaming Zhou, Wenlong Ouyang, Qiang Zhang, Junwei Liang*
    CoRL 2025 (Oral, ~5% acceptance rate)
  6. 3EED: Ground Everything Everywhere in 3D
    Rong Li, Yuhao Dong, Tianshuai Hu, Ao Liang, Youquan Liu, Dongyue Lu, Liang Pan, Lingdong Kong, Junwei Liang*, Ziwei Liu*
    NeurIPS 2025
  7. Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation
    Jiaming Zhou, Teli Ma, Kun-Yu Lin, Ronghe Qiu, Zifan Wang, Junwei Liang*
    CVPR 2025
  8. SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding
    Rong Li, Shijie Li, Lingdong Kong, Xulei Yang, Junwei Liang*
    CVPR 2025
  9. From Cognition to Precognition: A Future-Aware Framework for Social Navigation
    Zeying Gong, Tianshuai Hu, Ronghe Qiu, Junwei Liang*
    ICRA 2025
  10. GaussianProperty: Integrating Physical Properties to 3D Gaussians with LMMs
    Xinli Xu, Wenhang Ge, Dicong Qiu, ZhiFei Chen, Dongyu Yan, Zhuoyun Liu, Haoyu Zhao, Hanfeng Zhao, Shunsi Zhang, Junwei Liang*, Ying-Cong Chen*
    ICCV 2025
  11. Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation
    Teli Ma, Jiaming Zhou, Zifan Wang, Ronghe Qiu, Junwei Liang*
    CoRL 2024
  12. Prioritized Semantic Learning for Zero-shot Instance Navigation
    Xinyu Sun, Lizhao Liu, Hongyan Zhi, Ronghe Qiu, Junwei Liang*
    ECCV 2024
  13. Dragtraffic: Interactive and Controllable Traffic Scene Generation for Autonomous Driving
    Sheng Wang, Ge Sun, Fulong Ma, Tianshuai Hu, Qiang Qin, Yongkang Song, Lei Zhu, Junwei Liang*
    IROS 2024
  14. Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models
    Xiaoyu Zhu, Hao Zhou, Pengfei Xing, Long Zhao, Hao Xu, Junwei Liang, Alexander Hauptmann, Ting Liu, Andrew Gallagher
    ECCV 2024
  15. An Examination of the Compositionality of Large Generative Vision-Language Models
    Teli Ma, Rong Li, Junwei Liang*
    NAACL 2024
  16. Open-vocabulary Mobile Manipulation in Unseen Dynamic Environments with 3D Semantic Maps
    Dicong Qiu, Wenzong Ma, Zhenfu Pan, Hui Xiong, Junwei Liang*
    Preprint