zixuan-ye/SmartMatting


[CVPR2024] Unifying Automatic and Interactive Matting with Pretrained ViTs


⚙️ Setup

Install Environment via Anaconda (Recommended)

conda create -n SMat python=3.8
conda activate SMat
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install -r requirements.txt
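Before moving on, it can help to confirm that the environment resolved correctly. A minimal sanity check (a hypothetical helper, not part of the repo):

```python
# Hypothetical environment check; not part of the SMat repo.
# importlib.util.find_spec reports whether a package is importable
# without paying the cost of actually importing it.
import importlib.util
import sys

def missing_packages(names):
    """Return the subset of `names` that is not installed."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    assert sys.version_info[:2] >= (3, 8), "SMat targets Python 3.8"
    print(missing_packages(["torch", "detectron2"]))
```

An empty list means both key dependencies are importable; otherwise re-run the install steps above for whatever is reported missing.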

💫 Inference

Local Gradio demo

  1. Download the pretrained model from GoogleDrive or BaiduYunPan and put it in the `./ckpt` directory.
  2. Run the following command in a terminal.
  sh app_inference.sh
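Under the hood, `app_inference.sh` presumably launches a Gradio interface around the matting model. An illustrative sketch of that shape, with a placeholder predictor standing in for the actual SMat model:

```python
# Illustrative Gradio wiring only; the repo's real app lives behind
# app_inference.sh and runs the pretrained SMat model.
import numpy as np

def predict_alpha(image: np.ndarray) -> np.ndarray:
    """Placeholder predictor mapping an H x W x 3 uint8 image
    to an H x W uint8 alpha matte (here: a trivial grayscale)."""
    return image.mean(axis=2).astype(np.uint8)

def build_demo():
    # Gradio is imported lazily so the predictor stays usable on its own.
    import gradio as gr
    return gr.Interface(fn=predict_alpha,
                        inputs=gr.Image(type="numpy"),
                        outputs=gr.Image(type="numpy"))

# To serve the UI locally: build_demo().launch()
```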

Example Image

Inference Matting Datasets

  1. Set the dataset paths at L27-44 of `inference_dataset.py`.
  2. In `inference_dataset.sh`, select the benchmark you want to evaluate.
  3. Run the following command.
  sh inference_dataset.sh
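Matting benchmarks are conventionally scored with SAD and MSE over the predicted alpha matte; a minimal sketch of those two metrics (illustrative only, not the repo's evaluation code):

```python
# Standard matting metrics on alpha mattes in [0, 1]; illustrative sketch.
import numpy as np

def sad(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sum of Absolute Differences, reported in thousands by convention."""
    return float(np.abs(pred - gt).sum() / 1000.0)

def mse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Squared Error over all pixels."""
    return float(np.square(pred - gt).mean())

pred = np.full((4, 4), 0.5)
gt = np.ones((4, 4))
print(f"SAD={sad(pred, gt):.3f}  MSE={mse(pred, gt):.3f}")
```

Lower is better for both; published results usually also report gradient and connectivity errors, which are omitted here for brevity.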

😉 Citation

@inproceedings{ye2024unifying,
      title={Unifying Automatic and Interactive Matting with Pretrained ViTs}, 
      author={Ye, Zixuan and Liu, Wenze and Guo, He and Liang, Yujia and Hong, Chaoyi and Lu, Hao and Cao, Zhiguo},
      booktitle={Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2024}
}

🤗 Acknowledgements

Our codebase builds on ViTMatte. Thanks to the authors for sharing their awesome codebase!

📢 Disclaimer

We developed this repository for RESEARCH purposes, so it may be used only for personal, research, and other non-commercial purposes.

