HUMOS: Human Motion Model Conditioned on Body Shape [ECCV 2024]

Code repository for the paper:
HUMOS: Human Motion Model Conditioned on Body Shape
Shashank Tripathi, Omid Taheri, Christoph Lassner, Michael J. Black, Daniel Holden, Carsten Stoll
European Conference on Computer Vision (ECCV), 2024

[Project Page] [Paper] [Video] [Poster] [License] [Contact]

News 🚩

  • [2025/07/10] Released training and inference code for HUMOS

Installation and Setup

  1. First, clone the repo. Then, we recommend creating a clean conda environment, activating it, and installing torch and torchvision as follows:
git clone --recursive https://github.com/sha2nkt/humos_website_backend.git
cd humos_website_backend
git submodule update --init --recursive
conda create -n humos_p310 python=3.10
conda activate humos_p310
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
  2. Install AitViewer from our custom fork:
cd aitviewer_humos
pip install -e .
cd ..
  3. Install the other dependencies:
pip install -r requirements.txt
pip install -e .
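
To confirm the environment is set up correctly before continuing, a quick check like the following verifies the pinned torch build and CUDA availability (a minimal sanity check, not part of the repo):

# Sanity check: confirm the pinned builds and that CUDA is visible.
import torch
import torchvision

print(torch.__version__)          # expected: 2.2.0+cu118
print(torchvision.__version__)    # expected: 0.17.0+cu118
print(torch.cuda.is_available())  # should print True on a CUDA machine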

Download pretrained model checkpoints

The fetch_data.sh script downloads two model checkpoints:

  1. Pretrained HUMOS auto-encoder -- used to initialize the HUMOS cycle-consistent training runs
  2. Final HUMOS model -- used for demo and inference
sh fetch_data.sh

Download the SMPL model

Go to the SMPL website, register, and open the Download tab.

  • Click on "Download version 1.1.0 for Python 2.7 (female/male/neutral, 300 shape PCs)", then place the downloaded files in the folder body_models/smpl/.
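
To verify the download, the model can be loaded with the smplx package, a common loader for SMPL-family models. This is a hedged sketch: smplx is not necessarily what the repo uses internally, and by default it expects a file named like SMPL_NEUTRAL.pkl under model_path/smpl/, so adjust paths and file names to your layout.

# Minimal sketch: load SMPL via the smplx package to verify the files.
# Assumes smplx is installed (pip install smplx) and the model file is
# placed where smplx looks for it, e.g. body_models/smpl/SMPL_NEUTRAL.pkl.
import torch
import smplx

model = smplx.create(model_path="body_models", model_type="smpl", gender="neutral")
output = model(betas=torch.zeros(1, 10))  # zero betas -> mean body shape
print(output.vertices.shape)              # (1, 6890, 3) for SMPL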

Download the SMPL+H model

Go to the MANO website, register, and open the Download tab.

  • Click on "Models & Code" to download mano_v1_2.zip and place it in the folder body_models/smplh/.

  • Click on "Extended SMPL+H model" to download smplh.tar.xz and place it in the folder body_models/smplh/.

The next step is to extract the archives, merge the hands from mano_v1_2 into the Extended SMPL+H models, and remove any chumpy dependency.

All of this can be done with the following command:

bash humos/prepare/smplh.sh

This will create SMPLH_FEMALE.npz, SMPLH_MALE.npz, SMPLH_NEUTRAL.npz inside the body_models/smplh folder.
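
For reference, the chumpy-removal step boils down to replacing every chumpy array in the old Python 2 pickle with a plain numpy array before re-saving. Below is a minimal sketch of that idea with illustrative paths; the repo's actual logic lives in humos/prepare/merge_smplh_mano.py and smplh.sh.

# Illustrative sketch of the chumpy-stripping idea; the repo's real
# script is humos/prepare/merge_smplh_mano.py. Unpickling the old
# Python 2 model files requires chumpy to be importable.
import pickle
import numpy as np

def strip_chumpy(in_pkl: str, out_npz: str) -> None:
    with open(in_pkl, "rb") as f:
        data = pickle.load(f, encoding="latin1")  # py2-era pickle
    clean = {}
    for key, val in data.items():
        # chumpy arrays expose their numpy values via the .r attribute
        clean[key] = np.asarray(val.r) if hasattr(val, "r") else val
    np.savez(out_npz, **clean)

# hypothetical input/output paths, shown only for illustration
strip_chumpy("smplh_extended/SMPLH_NEUTRAL.pkl", "body_models/smplh/SMPLH_NEUTRAL.npz")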

The resulting structure for the body_models directory should look like this:

.
├── prepare
│   ├── merge_smplh_mano.py
│   └── smplh.sh
├── smpl
│   ├── female
│   │   ├── model.npz
│   │   ├── model.pkl
│   │   └── smpl_female.bvh
│   ├── male
│   │   ├── model.npz
│   │   ├── model.pkl
│   │   └── smpl_male.bvh
│   └── neutral
│       ├── model.npz
│       └── model.pkl
└── smplh
    ├── female
    │   └── model.npz
    ├── male
    │   └── model.npz
    ├── neutral
    │   └── model.npz
    ├── SMPLH_FEMALE.npz
    ├── SMPLH_FEMALE.pkl
    ├── SMPLH_MALE.npz
    ├── SMPLH_MALE.pkl
    └── SMPLH_NEUTRAL.npz

Preparing the AMASS dataset

  1. Run all blocks of humos/prepare/raw_pose_processing_humos.ipynb in a Jupyter notebook.
  2. Remove treadmill sequences from BMLrub (BML_NTroje) and MPI_HDM05: python humos/prepare/clean_amass_data.py
  3. Extract HUMOS features (the 20 fps resampling this involves is sketched after this list): python humos/prepare/compute_3dfeats.py --fps 20
  4. Remove text annotations whose motion paths don't exist: python humos/prepare/process_text_annotations.py
  5. Compute the dataset mean of all input features: python humos/prepare/motion_stats.py
  6. Sample random body shapes, used for inference: python humos/prepare/sample_body_shapes.py
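
As referenced in step 3 above, feature extraction resamples each AMASS sequence to 20 fps. The sketch below shows what that resampling amounts to, using the standard AMASS .npz field names; the actual feature computation is in humos/prepare/compute_3dfeats.py.

# Minimal sketch: downsample an AMASS sequence to a target frame rate,
# as implied by --fps 20. Field names follow the AMASS .npz convention.
import numpy as np

def resample_amass(npz_path: str, target_fps: float = 20.0) -> dict:
    data = np.load(npz_path)
    src_fps = float(data["mocap_framerate"])
    step = int(round(src_fps / target_fps))  # e.g. 120 fps -> every 6th frame
    return {
        "poses": data["poses"][::step],  # (T', 156) SMPL+H axis-angle poses
        "trans": data["trans"][::step],  # (T', 3) root translation
        "betas": data["betas"],          # body shape is constant per sequence
        "fps": src_fps / step,
    }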

Run inference on HUMOS

python humos/test.py --cfg humos/configs/cfg_template_test.yml

If you only want to visualize some sample results without running the metric evaluation, set the following flags to False in humos/configs/cfg_template_test.yml. This is typically faster.

METRICS:
  DYN_STABILITY: False
  RECONS: False
  PHYSICS: False
  MOTION_PRIOR: False
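
If you would rather flip these flags from a script than edit the file by hand, a small helper along these lines works, assuming the config file is plain YAML (key names taken from the block above):

# Illustrative helper: disable all metric flags in the test config.
# Assumes the config is plain YAML readable by PyYAML; note that
# re-dumping the file loses any comments and key ordering.
import yaml

cfg_path = "humos/configs/cfg_template_test.yml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

for key in ("DYN_STABILITY", "RECONS", "PHYSICS", "MOTION_PRIOR"):
    cfg["METRICS"][key] = False

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)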

Currently, the motions are rendered with AitViewer and logged to Wandb. You can switch to a different visualizer by modifying the on_validation_epoch_end() function in humos/src/model/tmr_cyclic.py.
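
For orientation, an overridden hook could look roughly like the outline below. Only the hook name and its location come from the repo; the class name and cached attribute are placeholders.

# Rough outline of swapping the visualizer: dump validation meshes to
# disk instead of rendering. Only on_validation_epoch_end() is the
# repo's hook; the class and attribute names here are placeholders.
import os
import numpy as np

class TMRCyclicSketch:  # stand-in for the LightningModule in tmr_cyclic.py
    def on_validation_epoch_end(self):
        out_dir = "val_meshes"
        os.makedirs(out_dir, exist_ok=True)
        for i, verts in enumerate(self.val_vertices):  # placeholder buffer
            np.save(os.path.join(out_dir, f"seq_{i:03d}.npy"), verts)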

Run HUMOS training

python humos/train.py --cfg humos/configs/cfg_template.yml

Running HUMOS DEMO

The following script runs the HUMOS demo by randomly mixing AMASS identities and saves the sequence of meshes as per-frame .obj files under ./demo/humos_{ckpt_name}_fit_objs/pred. The sequences are also visualized in Wandb.

python humos/train.py --cfg humos/configs/cfg_template_demo.yml
[Demo renders: generated video for identity B, and the generated video for identity B overlaid with input motion A]
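
For reference, exporting a predicted mesh sequence as per-frame .obj files, as the demo does, can be done with trimesh; a minimal sketch with illustrative variable names:

# Minimal sketch: write a sequence of predicted meshes as per-frame
# .obj files, mirroring what the demo saves under
# ./demo/humos_{ckpt_name}_fit_objs/pred. Variable names are illustrative.
import os
import trimesh

def export_obj_sequence(vertices_seq, faces, out_dir):
    """vertices_seq: (T, V, 3) array; faces: (F, 3) SMPL topology."""
    os.makedirs(out_dir, exist_ok=True)
    for t, verts in enumerate(vertices_seq):
        mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=False)
        mesh.export(os.path.join(out_dir, f"{t:04d}.obj"))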

Citing

If you find this code useful for your research, please consider citing the following papers:

@InProceedings{tripathi2024humos,
    author    = {Tripathi, Shashank and Taheri, Omid and Lassner, Christoph and Black, Michael J. and Holden, Daniel and Stoll, Carsten},
    title     = {{HUMOS}: Human Motion Model Conditioned on Body Shape},
    booktitle = {European Conference on Computer Vision},
    organization = {Springer},
    year      = {2025},
    pages     = {133--152},
}

Several parts of this code are heavily based on IPMAN and TMR. Please also consider citing these works:

@inproceedings{tripathi2023ipman,
    title     = {{3D} Human Pose Estimation via Intuitive Physics},
    author    = {Tripathi, Shashank and M{\"u}ller, Lea and Huang, Chun-Hao P. and Taheri, Omid and Black, Michael J. and Tzionas, Dimitrios},
    booktitle = {Conference on Computer Vision and Pattern Recognition ({CVPR})},
    pages = {4713--4725},
    year = {2023},
    url = {https://ipman.is.tue.mpg.de}
}
@inproceedings{petrovich23tmr,
    title     = {{TMR}: Text-to-Motion Retrieval Using Contrastive {3D} Human Motion Synthesis},
    author    = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
    booktitle = {International Conference on Computer Vision ({ICCV})},
    year      = {2023}
}

License

See LICENSE.

Acknowledgments

We sincerely thank Tsvetelina Alexiadis, Alpar Cseke, Tomasz Niewiadomski, and Taylor McConnell for facilitating the perceptual study, and Giorgio Becherini for his help with the Rokoko baseline. We are grateful to Iain Matthews, Brian Karis, Nikos Athanasiou, Markos Diomataris, and Mathis Petrovich for valuable discussions and advice. Their invaluable contributions enriched this research significantly.

Contact

For technical questions, please create an issue. For other questions, please contact [email protected].
