SE-ORNet: Self-Ensembling Orientation-aware Network for Unsupervised Point Cloud Shape Correspondence
PyTorch implementation for our CVPR 2023 paper SE-ORNet.
[Project Webpage] [Paper]
- 28 February 2023: SE-ORNet is accepted at CVPR 2023. 🔥
- 10. April 2023: SE-ORNet preprint released on arXiv.
- Coming Soon: Code will be released soon.
- Create a virtual environment via `conda`.

  ```
  conda create -n se_ornet python=3.10 -y
  conda activate se_ornet
  ```

- Install `torch` and `torchvision`.

  ```
  conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
  ```

- Install the remaining environment dependencies.

  ```
  sh setup.sh
  ```
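A quick way to confirm the environment matches the pinned versions is a short check (a minimal sketch; the printed versions should match the dependency list further below, and CUDA availability depends on your machine):

```python
# Minimal environment sanity check (assumes the se_ornet conda env above is active).
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 1.12.1
print("torchvision:", torchvision.__version__)  # expected: 0.13.1
print("CUDA build:", torch.version.cuda)        # expected: 11.3
print("CUDA available:", torch.cuda.is_available())
```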
```
├── SE-ORNet
│   ├── __init__.py
│   ├── train.py <- the main file
│   ├── models
│   │   ├── metrics
│   │   ├── modules
│   │   ├── runners
│   │   ├── correspondence_utils.py
│   │   ├── data_augment_utils.py
│   │   └── shape_corr_trainer.py
│   ├── utils
│   │   ├── __init__.py
│   │   ├── argparse_init.py
│   │   ├── cyclic_scheduler.py
│   │   ├── model_checkpoint_utils.py
│   │   ├── pytorch_lightning_utils.py
│   │   ├── switch_functions.py
│   │   ├── tensor_utils.py
│   │   └── warmup.py
│   ├── visualization
│   │   ├── __init__.py
│   │   ├── mesh_container.py
│   │   ├── mesh_visualization_utils.py
│   │   ├── mesh_visualizer.py
│   │   ├── orca_xvfb.bash
│   │   └── visualize_api.py
│   └── ChamferDistancePytorch
├── data
│   ├── point_cloud_db
│   ├── __init__.py
│   └── generate_smal.md
├── .gitignore
├── .gitmodules
├── README.md
└── LICENSE
```
The main dependencies of the project are the following:
```
python: 3.10
cuda: 11.3
pytorch: 1.12.1
```

The method was evaluated on:
- SURREAL
  - 230k shapes (DPC uses the first 2k).
  - Dataset website
  - This code downloads and preprocesses SURREAL automatically.
- SHREC'19
  - 44 human scans.
  - Dataset website
  - This code downloads and preprocesses SHREC'19 automatically.
- SMAL
  - 10000 animal models (2000 models per animal, 5 animals).
  - Dataset website
  - Due to licensing concerns, you should register for SMAL and download the dataset yourself.
  - After downloading the dataset, follow data/generate_smal.md.
  - To ease the usage of this benchmark, the processed dataset can be downloaded from here. Please extract it and place it under data/datasets/smal (see the layout check after this list).
- TOSCA
  - 41 animal figures.
  - Dataset website
  - This code downloads and preprocesses TOSCA automatically.
  - To ease the usage of this benchmark, the processed dataset can be downloaded from here. Please extract it and place it under data/datasets/tosca (see the layout check after this list).
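For the benchmarks that are downloaded manually, a quick check of the expected folder layout can catch extraction mistakes (a minimal sketch; it only verifies that the directories named above exist):

```python
# Check that manually extracted datasets sit where the instructions above place them.
from pathlib import Path

for name in ("smal", "tosca"):
    root = Path("data/datasets") / name
    status = "found" if root.is_dir() else "missing"
    print(f"{name}: {root} ({status})")
```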
The metrics are obtained from 5 training runs followed by 5 test runs. We report both the best and the average values (the latter are given in round brackets).
Human Datasets
| Dataset | acc@0.01 | err@0.01 | Download |
|---|---|---|---|
| SHRECβ19 | 17.5 (16.8) | 5.1 (5.6) | model |
| SURREAL | 22.3 (21.3) | 4.5 (4.8) | model |
Animal Datasets
| Dataset | acc@0.01 | err@0.01 | Download |
|---|---|---|---|
| TOSCA | 40.8 (38.1) | 2.7 (2.8) | model |
| SMAL | 38.3 (36.2) | 3.3 (3.8) | model |
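For reference, correspondence benchmarks of this kind typically report an accuracy-at-tolerance and an average correspondence error. The following is a generic sketch of how such metrics are usually computed, not the evaluation code of this repository; the normalization by the target shape's bounding-box diagonal and the 0.01 tolerance are assumptions for illustration:

```python
# Generic sketch of tolerance-based correspondence metrics (illustration only).
import numpy as np

def correspondence_metrics(pred_points, gt_points, target_points, tol=0.01):
    """Return (accuracy within `tol`, average error), with distances normalized
    by the target shape's bounding-box diagonal."""
    diagonal = np.linalg.norm(target_points.max(axis=0) - target_points.min(axis=0))
    errors = np.linalg.norm(pred_points - gt_points, axis=1) / diagonal
    accuracy = float((errors <= tol).mean())  # fraction of points within tolerance
    avg_error = float(errors.mean())          # mean normalized error
    return accuracy, avg_error
```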
For training, run:

```
python train.py --dataset_name <surreal/tosca/shrec/smal>
```
The code is based on PyTorch Lightning, so all PL hyperparameters are supported.
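As an illustration of what "PL hyperparameters" means here, Lightning 1.x exposes its Trainer arguments (e.g. --max_epochs, --gpus) through a standard argparse pattern; the sketch below shows that pattern, and whether train.py wires its parser exactly this way is an assumption:

```python
# Standard PyTorch Lightning 1.x argparse pattern (a sketch, not a verified
# excerpt from this repository's argparse_init.py).
import argparse
import pytorch_lightning as pl

parser = argparse.ArgumentParser()
parser.add_argument("--dataset_name", default="surreal")
parser = pl.Trainer.add_argparse_args(parser)   # adds --max_epochs, --gpus, ...
args = parser.parse_args()
trainer = pl.Trainer.from_argparse_args(args)   # builds the Trainer from CLI flags
```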
For testing, simply add the `--do_train false` flag, followed by `--resume_from_checkpoint` with the relevant checkpoint:

```
python train.py --do_train false --resume_from_checkpoint <path>
```

The test phase visualizes each sample; for faster inference, pass `--show_vis false`.

We provide a trained checkpoint reproducing the results reported in the paper. To test and visualize the model, run:

```
python train.py --show_vis --do_train false --resume_from_checkpoint data/ckpts/surreal_ckpt.ckpt
```
If you find our work useful and use the codebase or models in your research, please cite our work as follows.

```
@inproceedings{Deng2023seornet,
  title={{SE}-{ORN}et: Self-Ensembling Orientation-aware Network for Unsupervised Point Cloud Shape Correspondence},
  author={Jiacheng Deng and ChuXin Wang and Jiahao Lu and Jianfeng He and Tianzhu Zhang and Jiyang Yu and Zhe Zhang},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=DS6AyDWnAv}
}
```






