This repo contains code for the paper *Deep Sensorimotor control by Imitating Predictive Models of Human Motion*.
We capture a fine-grained visual understanding of human motion from videos and then use it to train sensorimotor policies. By tracking the predictions of such a model, we can train robot policies that follow human behavior without relying on excessive manual reward engineering, real2sim, kinematic retargeting, or affordance prediction.
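As a rough illustration of what "tracking the predictions" can mean, here is a hypothetical per-step reward that penalizes the distance between keypoints forecast by a human-motion model and the corresponding robot keypoints. This is a minimal sketch with placeholder names and shapes, not the objective actually used in this repo:

```python
# Hypothetical sketch: a per-step reward that encourages the robot hand to track
# keypoints predicted by a model of human motion. `predicted_keypoints` and
# `robot_keypoints` are placeholder names, not identifiers from this repo.
import numpy as np

def tracking_reward(predicted_keypoints: np.ndarray,
                    robot_keypoints: np.ndarray,
                    scale: float = 10.0) -> float:
    """Reward is high when the robot's keypoints stay close to the keypoints
    forecast by the human-motion prediction model.

    Both arrays have shape (K, 3): K keypoints with 3D positions.
    """
    # Mean Euclidean error between corresponding keypoints.
    error = np.linalg.norm(predicted_keypoints - robot_keypoints, axis=-1).mean()
    # Exponentiated negative error keeps the reward bounded in (0, 1].
    return float(np.exp(-scale * error))
```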
For a more detailed overview, check out the project webpage!
For any questions, please contact Himanshu Gaurav Singh.
- Create the conda environment:
  ```bash
  conda env create -f rlgpu.yaml
  ```
- Install IsaacGym in this environment.
- Download the asset folder and put it in the root directory.
- Download the hand-object interaction dataset from here and extract it:
  ```bash
  unzip dexycb_isaacgym.zip
  ```
  Put the extracted data under the root directory.
- Run:
  ```bash
  bash scripts/hst/train_mo.sh <DATADIR>
  ```
- A pretrained checkpoint for human keypoint prediction is provided at `checkpoints/track_predn.pt`. You can also use your own trained checkpoint (see the inspection sketch after this list).
- For your choice of task, run:
  ```bash
  bash scripts/distmatch/distmatch_{task}.sh
  ```
- To visualise a trained policy, run:
  ```bash
  bash scripts/visualise/visualise_{task}.sh <PATH_TO_POLICY>
  ```
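
If you want to sanity-check the pretrained checkpoint before plugging it into the scripts above, the sketch below shows one way to inspect it. It assumes `checkpoints/track_predn.pt` is an ordinary PyTorch checkpoint; the exact contents and key names in this repo may differ.

```python
# Hypothetical sketch: inspect the pretrained keypoint-prediction checkpoint.
# Assumes the file was written with torch.save; contents may be a state dict,
# a full module, or a dict with extra entries (optimizer state, config, ...).
import torch

checkpoint = torch.load("checkpoints/track_predn.pt", map_location="cpu")
if isinstance(checkpoint, dict):
    # Print the top-level keys so you can see what the checkpoint stores.
    print(list(checkpoint.keys()))
else:
    print(type(checkpoint))
```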
