Yixuan Zhu*, Haolin Wang*, Shilin Ma*, Wenliang Zhao, Yansong Tang, Lei Chen†, Jie Zhou†

\* Equal contribution, † Corresponding author
This repository contains the official implementation of the paper "FADE: Frequency-Aware Diffusion Model Factorization for Video Editing" (CVPR 2025).
FADE (Frequency-Aware Diffusion Model Factorization for Video Editing) is a training-free yet highly effective video editing approach that fully leverages the inherent priors of pre-trained video diffusion models.
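As a rough intuition for the "frequency-aware" idea (this is an illustration only, not the paper's actual factorization, which operates on the diffusion model itself), a latent can be separated into low- and high-frequency bands with an FFT; the cutoff value and tensor shapes below are assumptions:

```python
import numpy as np

def split_frequency_bands(latent: np.ndarray, cutoff: float = 0.25):
    """Split a 2D latent into low-/high-frequency parts via FFT masking.

    Illustrative sketch only; `cutoff` is an assumed hyperparameter.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(latent))
    h, w = latent.shape
    yy, xx = np.mgrid[-h // 2:(h + 1) // 2, -w // 2:(w + 1) // 2]
    # Radial low-pass mask: keep frequencies within `cutoff` of Nyquist.
    mask = np.hypot(yy / (h / 2), xx / (w / 2)) <= cutoff
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    high = latent - low  # residual carries the high-frequency detail
    return low, high

latent = np.random.default_rng(0).standard_normal((32, 32))
low, high = split_frequency_bands(latent)
assert np.allclose(low + high, latent)  # the two bands sum back to the input
```

The low band captures coarse layout and motion; the high band captures fine detail, which is the kind of separation a frequency-aware editing method can exploit.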
- ☑️ Release the code for video editing
- ☑️ Release the paper
We recommend using an Anaconda virtual environment. If you have Anaconda installed, run the following commands to create and activate one:
```shell
conda create --name FADE python=3.10
conda activate FADE
pip install -r requirements.txt
```

We use CogVideoX-5B as our foundation model; please download it:
```shell
pip install huggingface_hub
huggingface-cli login
hf download zai-org/CogVideoX-5b
```

We use HybridGL to generate the masks used for editing. Please clone that repository and set up its environment by running the following commands:
```shell
cd HybridGL
python -m spacy download en_core_web_lg
cd third_party
cd modified_CLIP
pip install -e . --no-build-isolation
cd ..
cd segment-anything
pip install -e .
cd ../..
mkdir checkpoints
cd checkpoints
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
```

Run editing with CogVideoX-5b:

```shell
bash edit_pipeline.sh
```
To edit your own video:

- Upload your video to the `input` folder.
- Modify `edit_pipeline.sh`, especially `init_prompt`, `edit_prompt`, `input_video_path`, and other related parameters.
- Create a config file by following the format of `configs/bear.yaml`. The trade-off between preservation and editing can be tuned by adjusting `self_attn_gs`.
- Run `bash edit_pipeline.sh`.
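A minimal config sketch in the spirit of `configs/bear.yaml` (only `self_attn_gs` is documented above; consult the bundled file for the actual schema and any additional fields):

```yaml
# Hypothetical example; only `self_attn_gs` is described in this README.
# Higher values favor preservation of the source video over edit strength.
self_attn_gs: 0.6
```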
This project is licensed under the MIT License.
If you use this code for your research, please cite our paper:
@misc{zhu2025fadefrequencyawarediffusionmodel,
title={FADE: Frequency-Aware Diffusion Model Factorization for Video Editing},
author={Yixuan Zhu and Haolin Wang and Shilin Ma and Wenliang Zhao and Yansong Tang and Lei Chen and Jie Zhou},
year={2025},
eprint={2506.05934},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.05934},
}
