```
conda create -n SMat python=3.8
conda activate SMat
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install -r requirements.txt
```

- Download the pretrained models and put them in the './ckpt' dir. They are available from GoogleDrive or BaiduYunPan.
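Before running inference, a quick sanity check can confirm the environment and checkpoints are in place. A minimal sketch (the package list and the './ckpt' layout are assumptions based on the steps above, not a script shipped with the repo):

```python
import importlib.util
import os

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

def list_checkpoints(path="./ckpt"):
    """List checkpoint-like files in the pretrained-model directory."""
    if not os.path.isdir(path):
        raise FileNotFoundError(f"checkpoint dir not found: {path}")
    return sorted(f for f in os.listdir(path)
                  if f.endswith((".pth", ".pt", ".ckpt")))

# After setup, missing_packages(["torch", "detectron2"]) should be empty
# and list_checkpoints() should show the downloaded model file(s).
```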
- Run the following command in a terminal:

```
sh app_inference.sh
```

- Modify L27-44 in inference_dataset.py with the correct paths.
- Modify the benchmark you want to validate in inference_dataset.sh
- Run the following command.
```
sh inference_dataset.sh
```

If you find this work useful, please consider citing:

```
@inproceedings{ye2024unifying,
  title={Unifying Automatic and Interactive Matting with Pretrained ViTs},
  author={Ye, Zixuan and Liu, Wenze and Guo, He and Liang, Yujia and Hong, Chaoyi and Lu, Hao and Cao, Zhiguo},
  booktitle={Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```
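The per-benchmark paths edited in inference_dataset.py amount to a mapping from benchmark name to data directories. A hypothetical sketch of that idea (all variable names, benchmark names, and paths here are illustrative, not taken from the repo):

```python
# Hypothetical benchmark -> directory mapping; adapt the names and paths
# to whatever inference_dataset.py actually defines.
BENCHMARKS = {
    "AIM500": {"image_dir": "/data/AIM500/original", "alpha_dir": "/data/AIM500/mask"},
    "P3M500": {"image_dir": "/data/P3M500/original", "alpha_dir": "/data/P3M500/mask"},
}

def resolve_paths(name, benchmarks=BENCHMARKS):
    """Look up the directories for one benchmark, failing loudly on typos."""
    if name not in benchmarks:
        raise KeyError(f"unknown benchmark: {name}")
    return benchmarks[name]
```

Centralizing the paths in one mapping makes it easy to switch the benchmark being validated without touching the rest of the inference code.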
Our codebase builds on ViTMatte. Thanks to the authors for sharing their awesome codebase!
We developed this repository for RESEARCH purposes, so it may only be used for personal, research, or other non-commercial purposes.


