I obtained a Bachelor's degree in Automation from Harbin Institute of Technology, Shenzhen, along with a minor Bachelor's degree in Computer Science, and graduated as an outstanding graduate. My graduation projects for both the Automation major and the Computer Science minor received Best Bachelor Thesis Awards, making me the first person in HIT's history to receive two Best Bachelor Thesis Awards. During my undergraduate studies, I was fortunate to be mentored by Prof. Haoyao Chen and Prof. Yongyong Chen.
My research interests mainly include reinforcement learning🖥️, computer haptics🖐️, and robotics🤖. I am particularly interested in integrating reinforcement learning algorithms with real robotic hardware🦾.
Beyond my research, I am also interested in medicine🩺 and electronic circuits🔌.
We present MoE-DP, a diffusion-policy framework with a Mixture-of-Experts layer that decomposes visuomotor skills into interpretable subtasks and substantially improves robustness: success rates under disturbances rise by 36% across long-horizon tasks, and the policy transfers effectively to real robots. The expert structure also enables subtask reordering without retraining.
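To make the idea concrete, here is a minimal sketch of a generic Mixture-of-Experts layer in PyTorch. It illustrates only the basic mechanism of softly gating over a set of expert networks; the layer sizes and names (`n_experts`, `hidden`) are illustrative and this is not MoE-DP's actual implementation.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Minimal Mixture-of-Experts layer: a gating network softly routes
    each input across a set of small expert MLPs (illustrative only)."""
    def __init__(self, dim, n_experts=4, hidden=256):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)             # (B, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)   # (B, dim, n_experts)
        return torch.einsum("bde,be->bd", outputs, weights)       # gated mixture

x = torch.randn(8, 64)
print(MoELayer(dim=64)(x).shape)  # torch.Size([8, 64])
```

In a policy network, the gating weights give each expert a chance to specialize, which is what makes the learned subtask decomposition inspectable.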
A leading model-free visual RL algorithm that achieves state-of-the-art sample efficiency and final performance on various tasks. It can even be trained directly on a real robot!
We build a mobile manipulator equipped with a dexterous hand and leverage reinforcement learning to train a whole-body control policy that lets the robot catch diverse objects randomly thrown by humans.
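For a flavor of how such a task can be posed as an RL problem, below is a tiny illustrative reward sketch for catching: a dense distance-shaping term plus a sparse success bonus. The function name and constants are hypothetical, not the actual reward design used in this work.

```python
import numpy as np

def catch_reward(hand_pos, obj_pos, obj_caught, dist_scale=2.0):
    """Illustrative dense-plus-sparse reward for a throw-and-catch task:
    shrink the hand-object distance, big bonus on a secure catch."""
    dist = np.linalg.norm(hand_pos - obj_pos)
    reward = np.exp(-dist_scale * dist)   # dense shaping term in (0, 1]
    if obj_caught:
        reward += 10.0                    # sparse success bonus
    return reward

print(catch_reward(np.zeros(3), np.array([0.1, 0.0, 0.2]), obj_caught=True))
```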
We propose a novel incomplete multi-view clustering (IMVC) method named Data Completion-guided Unified Graph Learning (DCUGL), which completes the data of missing views and fuses multiple learned view-specific similarity matrices into one unified graph.
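For intuition about the fusion step, here is a generic sketch of combining view-specific similarity matrices into a single graph via a weighted average with symmetrization. This is a deliberate simplification for illustration, not the DCUGL optimization itself.

```python
import numpy as np

def fuse_similarity_graphs(S_views, weights=None):
    """Fuse view-specific similarity matrices (V, n, n) into one unified
    graph by a (weighted) average, then enforce symmetry. Generic sketch,
    not the actual DCUGL objective."""
    S_views = np.asarray(S_views)
    if weights is None:
        weights = np.full(len(S_views), 1.0 / len(S_views))  # uniform views
    S = np.tensordot(weights, S_views, axes=1)               # (n, n)
    return (S + S.T) / 2                                     # symmetrize

# Toy example: two views over 4 samples
rng = np.random.default_rng(0)
views = [rng.random((4, 4)) for _ in range(2)]
print(fuse_similarity_graphs(views).round(2))
```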
We present RiEMann, an end-to-end near Real-time SE(3)-Equivariant Robot Manipulation imitation learning framework from scene point cloud input. Compared to previous methods that rely on descriptor field matching, RiEMann directly predicts the target poses of objects for manipulation without any object segmentation.
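SE(3)-equivariance here means that rotating and translating the input point cloud transforms the predicted target pose in exactly the same way. The sketch below checks this property numerically for any pose predictor; `toy_equivariant_predict` is a hypothetical stand-in built from a Gram-Schmidt frame, not RiEMann's network.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def check_se3_equivariance(predict, points, atol=1e-6):
    """Rotating/translating the input cloud should map the predicted pose
    (R_out, t_out) to (R @ R_out, R @ t_out + t)."""
    R = Rotation.random().as_matrix()
    t = np.random.randn(3)
    R_out, t_out = predict(points)
    R_out2, t_out2 = predict(points @ R.T + t)
    return (np.allclose(R_out2, R @ R_out, atol=atol)
            and np.allclose(t_out2, R @ t_out + t, atol=atol))

def toy_equivariant_predict(points):
    """Hypothetical equivariant 'model': an orthonormal frame from the first
    three points via Gram-Schmidt; rotating the cloud rotates the frame."""
    a, b = points[1] - points[0], points[2] - points[0]
    x = a / np.linalg.norm(a)
    y = b - (b @ x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return np.stack([x, y, z], axis=1), points[0]

pts = np.random.randn(32, 3)
print(check_se3_equivariance(toy_equivariant_predict, pts))  # True
```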
We present ClothesNet, a large-scale dataset of 3D clothes objects with information-rich annotations. The dataset consists of around 4,400 models covering 11 categories, annotated with clothes features, boundary lines, and key points.
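To illustrate what one annotated entry might carry, here is a hypothetical record mirroring the annotation types listed above. The field names and types are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ClothesModel:
    """Hypothetical per-model record: category label, clothes features,
    boundary lines, and key points (field names are illustrative)."""
    mesh_path: str
    category: str                          # one of the 11 categories
    features: list[str] = field(default_factory=list)
    boundary_lines: list[list[tuple[float, float, float]]] = field(default_factory=list)
    keypoints: dict[str, tuple[float, float, float]] = field(default_factory=dict)

shirt = ClothesModel(mesh_path="shirt_0001.obj", category="shirt",
                     keypoints={"left_collar": (0.1, 0.4, 0.0)})
print(shirt.category, list(shirt.keypoints))
```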