Code repository for the paper:
HUMOS: Human Motion Model Conditioned on Body Shape
Shashank Tripathi, Omid Taheri, Christoph Lassner, Michael J. Black, Daniel Holden, Carsten Stoll
European Conference on Computer Vision (ECCV), 2024
[Project Page] [Paper] [Video] [Poster] [License] [Contact]
- [2025/07/10] Released training and inference code for HUMOS
- First, clone the repo. Then we recommend creating a clean conda environment, activating it, and installing torch and torchvision as follows:
```bash
git clone --recursive https://github.com/sha2nkt/humos_website_backend.git
cd humos_website_backend
git submodule update --init --recursive
conda create -n humos_p310 python=3.10
conda activate humos_p310
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
```
- Install AitViewer from our custom fork:
```bash
cd aitviewer_humos
pip install -e .
cd ..
```
- Install the other dependencies:
```bash
pip install -r requirements.txt
pip install -e .
```
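Optionally, you can sanity-check the environment with a quick snippet like the one below; this is our suggestion, not part of the repo:

```python
import torch
import torchvision

# Quick check (ours, not part of the repo) that the CUDA 11.8 wheels installed.
print(torch.__version__, torchvision.__version__)  # expect 2.2.0+cu118 / 0.17.0+cu118
print("CUDA available:", torch.cuda.is_available())
```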
The script below downloads two model checkpoints:
- Pretrained HUMOS auto-encoder -- used to initialize the HUMOS cycle-consistent training runs
- Final HUMOS model -- used for demo and inference
```bash
sh fetch_data.sh
```

Go to the SMPL website, register, and go to the Download tab.
- Click on "Download version 1.1.0 for Python 2.7 (female/male/neutral, 300 shape PCs)" to download and place the files in the folder
body_models/smpl/.
Go to the MANO website, register, and go to the Download tab.
- Click on "Models & Code" to download `mano_v1_2.zip` and place it in the folder `body_models/smplh/`.
- Click on "Extended SMPL+H model" to download `smplh.tar.xz` and place it in the folder `body_models/smplh/`.
The next step is to extract the archives, merge the hands from `mano_v1_2` into the Extended SMPL+H models, and remove any chumpy dependency. All of this can be done with the following command:
```bash
bash humos/prepare/smplh.sh
```

This will create `SMPLH_FEMALE.npz`, `SMPLH_MALE.npz`, and `SMPLH_NEUTRAL.npz` inside the `body_models/smplh` folder.
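As a quick sanity check (our suggestion, not part of the repo), the merged files should now load with plain numpy, since any leftover chumpy objects would require pickling:

```python
import numpy as np

# If chumpy removal succeeded, this loads cleanly without allow_pickle.
with np.load("body_models/smplh/SMPLH_NEUTRAL.npz", allow_pickle=False) as model:
    print(sorted(model.files))
```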
The resulting structure for the body_models directory should look like this:
```
.
├── prepare
│   ├── merge_smplh_mano.py
│   └── smplh.sh
├── smpl
│   ├── female
│   │   ├── model.npz
│   │   ├── model.pkl
│   │   └── smpl_female.bvh
│   ├── male
│   │   ├── model.npz
│   │   ├── model.pkl
│   │   └── smpl_male.bvh
│   └── neutral
│       ├── model.npz
│       └── model.pkl
└── smplh
    ├── female
    │   └── model.npz
    ├── male
    │   └── model.npz
    ├── neutral
    │   └── model.npz
    ├── SMPLH_FEMALE.npz
    ├── SMPLH_FEMALE.pkl
    ├── SMPLH_MALE.npz
    ├── SMPLH_MALE.pkl
    └── SMPLH_NEUTRAL.npz
```
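To confirm the models load, here is a minimal sketch assuming the `smplx` package, which is commonly used with these files (though not necessarily what HUMOS uses internally):

```python
import torch
import smplx  # assumption: the smplx package is installed

# Load the merged SMPL-H model created above and run one forward pass.
model = smplx.create("body_models", model_type="smplh", gender="neutral",
                     ext="npz", use_pca=False)
betas = 0.5 * torch.randn(1, 10)  # a random body shape
output = model(betas=betas)
print(output.vertices.shape)  # expect torch.Size([1, 6890, 3])
```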
- Please run all blocks of `humos/prepare/raw_pose_processing_humos.ipynb` in a Jupyter notebook.
- Clean treadmill sequences from BMLrub (BML_NTroje) and MPI_HDM05 by running `python humos/prepare/clean_amass_data.py`.
- Extract HUMOS features using `python humos/prepare/compute_3dfeats.py --fps 20`.
- Process text annotations by removing paths that don't exist: `python humos/prepare/process_text_annotations.py`.
- Get the dataset mean of all the input features with `python humos/prepare/motion_stats.py` (see the sketch after this list for how such statistics are typically used).
- Get random body shapes, used for inference: `python humos/prepare/sample_body_shapes.py`.
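For orientation, this is how such per-dimension statistics are typically consumed; the file paths below are illustrative assumptions, not the repo's actual output locations:

```python
import numpy as np

# Illustrative only: z-normalize motion features with dataset statistics.
# Paths are assumptions; motion_stats.py defines the actual outputs.
mean = np.load("stats/mean.npy")  # shape: (feat_dim,)
std = np.load("stats/std.npy")    # shape: (feat_dim,)

def normalize(feats: np.ndarray) -> np.ndarray:
    """Z-normalize motion features of shape (num_frames, feat_dim)."""
    return (feats - mean) / (std + 1e-8)

def denormalize(feats: np.ndarray) -> np.ndarray:
    """Invert the normalization for visualization and metrics."""
    return feats * (std + 1e-8) + mean
```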
```bash
python humos/test.py --cfg humos/configs/cfg_template_test.yml
```

In case you only want to visualize some sample results without running metric evaluation, you can set the following flags to `False` in `humos/configs/cfg_template_test.yml`. This is typically faster.
```yaml
METRICS:
  DYN_STABILITY: False
  RECONS: False
  PHYSICS: False
  MOTION_PRIOR: False
```

Currently, the motions are rendered using AitViewer and visualized in Wandb. You can change the visualizer to whatever you prefer by modifying the `on_validation_epoch_end()` function in `humos/src/model/tmr_cyclic.py`.
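If you swap the visualizer, the hook might look roughly like this; the class and helper below are schematic assumptions (assuming a `WandbLogger`), not the repo's actual code:

```python
import pytorch_lightning as pl
import wandb

class MotionModel(pl.LightningModule):
    # Schematic stand-in for the model in humos/src/model/tmr_cyclic.py.
    def on_validation_epoch_end(self):
        # render_sample_motions() is a hypothetical helper returning a
        # uint8 array of shape (time, channels, height, width).
        frames = self.render_sample_motions()
        # self.logger.experiment is the wandb run when using WandbLogger.
        self.logger.experiment.log(
            {"val/motion": wandb.Video(frames, fps=20, format="mp4")}
        )
```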
```bash
python humos/train.py --cfg humos/configs/cfg_template.yml
```

The following script runs the HUMOS demo by randomly mixing AMASS identities and saves the sequence of meshes as `.obj` files at `./demo/humos_{ckpt_name}_fit_objs/pred`. The sequences are also visualized in Wandb.
```bash
python humos/train.py --cfg humos/configs/cfg_template_demo.yml
```

| Generated videos for identity B | Generated video for identity B overlaid with input motion A |
| :---: | :---: |
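To work with the exported meshes downstream, something like the following can load them; `trimesh` is our choice here, not a stated dependency of the repo, and the wildcard stands in for the checkpoint name embedded in the output directory:

```python
import glob
import trimesh

# Load the demo's exported mesh sequence (one .obj per frame).
paths = sorted(glob.glob("./demo/humos_*_fit_objs/pred/*.obj"))
meshes = [trimesh.load(p, process=False) for p in paths]
print(f"Loaded {len(meshes)} frames;",
      f"{len(meshes[0].vertices)} vertices in the first mesh")
```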
If you find this code useful for your research, please consider citing the following papers:
```bibtex
@InProceedings{tripathi2024humos,
  author = {Tripathi, Shashank and Taheri, Omid and Lassner, Christoph and Black, Michael J. and Holden, Daniel and Stoll, Carsten},
  title = {{HUMOS}: Human Motion Model Conditioned on Body Shape},
  booktitle = {European Conference on Computer Vision},
  organization = {Springer},
  year = {2025},
  pages = {133--152},
}
```

Several parts of this code are heavily derived from IPMAN and TMR. Please also consider citing these works:
```bibtex
@inproceedings{tripathi2023ipman,
  title = {{3D} Human Pose Estimation via Intuitive Physics},
  author = {Tripathi, Shashank and M{\"u}ller, Lea and Huang, Chun-Hao P. and Taheri, Omid and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {Conference on Computer Vision and Pattern Recognition ({CVPR})},
  pages = {4713--4725},
  year = {2023},
  url = {https://ipman.is.tue.mpg.de}
}

@inproceedings{petrovich23tmr,
  title = {{TMR}: Text-to-Motion Retrieval Using Contrastive {3D} Human Motion Synthesis},
  author = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
  booktitle = {International Conference on Computer Vision ({ICCV})},
  year = {2023}
}
```

See LICENSE.
We sincerely thank Tsvetelina Alexiadis, Alpar Cseke, Tomasz Niewiadomski, and Taylor McConnell for facilitating the perceptual study, and Giorgio Becherini for his help with the Rokoko baseline. We are grateful to Iain Matthews, Brian Karis, Nikos Athanasiou, Markos Diomataris, and Mathis Petrovich for valuable discussions and advice. Their invaluable contributions enriched this research significantly.
For technical questions, please create an issue. For other questions, please contact [email protected].


