My research interest lies in robotics and computer vision, with the goal of
enabling robots to perform dexterous tasks with adaptability, generalizability,
and safety.
We introduce DexGarmentLab, a realistic simulation environment for bimanual dexterous garment manipulation. Building on this environment, we propose a new benchmark, an efficient data-collection pipeline, and a novel policy framework that leverages category-level visual correspondences for few-shot garment manipulation.
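To make the correspondence idea concrete, here is a minimal sketch of transferring a demonstrated grasp point to an unseen garment of the same category by nearest-neighbour matching in a dense per-point feature space. The function name and the random arrays standing in for real encoder outputs are illustrative assumptions, not DexGarmentLab's actual implementation.

```python
import numpy as np

def transfer_grasp_point(demo_feats, demo_grasp_idx, test_feats, test_points):
    """Transfer a demonstrated grasp point to a new garment instance via
    category-level dense correspondences (nearest neighbour in feature space).

    demo_feats:     (N, D) per-point descriptors of the demo garment
    demo_grasp_idx: index of the annotated grasp point on the demo garment
    test_feats:     (M, D) per-point descriptors of the unseen garment,
                    assumed to come from the same pretrained encoder
    test_points:    (M, 3) xyz coordinates of the unseen garment's point cloud
    """
    query = demo_feats[demo_grasp_idx]                       # (D,)
    # Cosine similarity between the demo grasp descriptor and every test point
    sims = test_feats @ query / (
        np.linalg.norm(test_feats, axis=1) * np.linalg.norm(query) + 1e-8
    )
    best = int(np.argmax(sims))                              # most similar point
    return test_points[best], best

# Toy usage with random stand-ins for real encoder features
rng = np.random.default_rng(0)
demo_feats = rng.normal(size=(1024, 64))
test_feats = rng.normal(size=(2048, 64))
test_points = rng.normal(size=(2048, 3))
grasp_xyz, idx = transfer_grasp_point(demo_feats, 42, test_feats, test_points)
```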
We propose to learn point-level affordance to model the complex configuration space and multi-modal manipulation candidates of garment piles, with novel designs that account for garment geometry, structure, and inter-object relations, and that support further adaptation.
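As an illustration of point-level affordance, the sketch below scores every point of a garment-pile point cloud and samples a grasp point from the resulting multi-modal distribution. The tiny per-point network and all names are hypothetical stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PointAffordanceNet(nn.Module):
    """Minimal per-point affordance scorer: each point in a garment-pile
    point cloud receives a score in (0, 1) indicating how promising it is
    to grasp there. A shared MLP encodes points, a max-pooled global
    feature captures pile-level structure, and a head scores each point."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.score_head = nn.Sequential(
            nn.Linear(feat_dim * 2, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point affordance scores (B, N)
        local = self.point_mlp(points)                        # (B, N, F)
        global_feat = local.max(dim=1, keepdim=True).values   # (B, 1, F)
        fused = torch.cat([local, global_feat.expand_as(local)], dim=-1)
        return torch.sigmoid(self.score_head(fused)).squeeze(-1)

net = PointAffordanceNet()
cloud = torch.randn(2, 2048, 3)            # two garment piles
scores = net(cloud)                        # (2, 2048) affordance map
# Sample a grasp point from the multi-modal affordance distribution,
# rather than always taking the argmax
probs = scores / scores.sum(dim=1, keepdim=True)
grasp_idx = torch.multinomial(probs, num_samples=1)   # (2, 1)
```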
We present GarmentLab, a benchmark for garment manipulation within realistic 3D indoor scenes. The benchmark encompasses a diverse range of garment types, robotic systems, and manipulators, including dexterous hands, and its breadth of tasks enables further exploration of interactions between garments, deformable objects, rigid bodies, fluids, and avatars.
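For a sense of how such a benchmark is consumed, the stub below shows the gym-style reset/step loop a task typically exposes. The class name, observation keys, and action shape are purely illustrative assumptions, not GarmentLab's actual API.

```python
import numpy as np

class GarmentTaskStub:
    """Hypothetical stand-in for a benchmark task environment; names and
    observation layout are illustrative, not GarmentLab's actual API."""

    def reset(self) -> dict:
        # A typical observation: garment point cloud plus robot joint state.
        return {"point_cloud": np.zeros((2048, 3)), "robot_qpos": np.zeros(7)}

    def step(self, action: np.ndarray):
        obs = self.reset()                  # stub: no physics simulated here
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

env = GarmentTaskStub()
obs = env.reset()
for _ in range(200):
    action = np.zeros(7)                   # placeholder policy output
    obs, reward, done, info = env.step(action)
    if done:
        break
```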