Deep Sensorimotor control by Imitating Predictive Models of Human Motion

This repo contains the code for the paper Deep Sensorimotor control by Imitating Predictive Models of Human Motion.

We capture a fine-grained visual understanding of human motion from videos and then use it to train sensorimotor policies. By tracking the predictions of such a model, we can train robot policies that follow human behavior without relying on extensive manual reward engineering, real2sim, kinematic retargeting, or affordance prediction.

For a more detailed overview, check out the project webpage!

Approach overview

For any questions, please contact Himanshu Gaurav Singh.

Setup

  • Create the conda environment with conda env create -f rlgpu.yaml.
  • Install IsaacGym in this environment.
  • Download the asset folder and put it in the root directory (see the sketch after this list).
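A minimal sketch of these steps is shown below; the environment name rlgpu and the IsaacGym extraction path are assumptions, not something specified by this repo:

```bash
# Create and activate the conda environment (name assumed from rlgpu.yaml)
conda env create -f rlgpu.yaml
conda activate rlgpu

# Install IsaacGym into the same environment
# (the extraction path below is hypothetical; use wherever you unpacked IsaacGym)
cd /path/to/isaacgym/python
pip install -e .
cd -
```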

Running the code

Training human motion prediction model on DexYCB

  • Download the hand-object interaction dataset from here. Extract it with unzip dexycb_isaacgym.zip and put the extracted folder under the root directory.
  • Run bash scripts/hst/train_mo.sh <DATADIR> (see the example below).
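For example, assuming the archive extracts into a folder named dexycb_isaacgym in the repo root (the folder name is an assumption based on the zip filename):

```bash
# Extract the DexYCB hand-object interaction data into the repo root
unzip dexycb_isaacgym.zip

# Train the human motion prediction model on the extracted data
bash scripts/hst/train_mo.sh dexycb_isaacgym
```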

Training robot policy by using human motion as reward

  • A pretrained checkpoint for human keypoint prediction is provided at checkpoints/track_predn.pt. You can also use your own trained checkpoint.
  • For your task of choice, run bash scripts/distmatch/distmatch_{task}.sh (see the example below).
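For example, where the task name is an illustrative placeholder and the available tasks are whatever scripts exist under scripts/distmatch/:

```bash
# List the available task scripts, then launch training for one of them
ls scripts/distmatch/
TASK=relocate   # hypothetical task name, for illustration only
bash scripts/distmatch/distmatch_${TASK}.sh
```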

Visualising trained policies

  • Run bash scripts/visualise/visualise_{task}.sh <PATH_TO_POLICY> (see the example below).
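For example, with an illustrative task name and checkpoint path (both are placeholders, not paths shipped with the repo):

```bash
# Roll out and render a trained policy from a saved checkpoint
TASK=relocate   # hypothetical task name, for illustration only
bash scripts/visualise/visualise_${TASK}.sh runs/${TASK}/policy.pt
```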

Citation

Acknowledgment
