[ICLR 2025] VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking

Official implementation of VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking.

Video Examples

ModelScope

Four example videos (columns: Watermarked, Tampered, GT Mask, Pred Mask; images shown in the repository).

Stable-Video-Diffusion

Four example videos (columns: Watermarked, Tampered, GT Mask, Pred Mask; images shown in the repository).

Environment Setup

pip install -r requirements.txt

Model Download

Download the video generation model (ModelScope or Stable-Video-Diffusion) to a directory of your choice.

Running the Scripts

1. Watermark Embedding and Extraction

  • For ModelScope:
python3 watermark_embedding_and_extraction.py \
	--device 'cuda:0' \
	--model_name modelscope \
	--model_path <your_model_path> \
	--num_frames 16 \
	--height 256 \
	--width 256 \
	--frames_copy 8 \
	--hw_copy 4 \
	--channel_copy 1 \
	--num_inference_steps 25
  • For Stable-Video-Diffusion:
python3 watermark_embedding_and_extraction.py \
	--device 'cuda:0' \
	--model_name stable-video-diffusion \
	--model_path <your_model_path> \
	--num_frames 16 \
	--height 512 \
	--width 512 \
	--frames_copy 8 \
	--hw_copy 8 \
	--channel_copy 1 \
	--num_inference_steps 25

Note:

  • You can also omit --model_path (and skip the Model Download step); the script will then download the model to the default cache directory automatically.
  • The generated watermarked video and watermark information will be saved in the ./results directory by default.
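The --frames_copy, --hw_copy, and --channel_copy options set how many times each watermark bit is replicated along the temporal, spatial, and channel axes of the latent; more copies trade watermark capacity for robustness, since each bit can later be recovered by a majority vote over its copies. A minimal NumPy sketch of this replicate-then-vote idea (the shapes, function names, and 10% flip rate are illustrative assumptions, not the repository's actual API):

```python
import numpy as np

def embed_bits(bits, frames_copy, channel_copy, hw_copy):
    """Replicate a small bit tensor along frame, channel, and spatial axes."""
    t = np.repeat(bits, frames_copy, axis=0)   # temporal copies
    t = np.repeat(t, channel_copy, axis=1)     # channel copies
    t = np.repeat(t, hw_copy, axis=2)          # height copies
    t = np.repeat(t, hw_copy, axis=3)          # width copies
    return t

def extract_bits(template, orig_shape, frames_copy, channel_copy, hw_copy):
    """Recover each bit by majority vote over its replicated copies."""
    f, c, h, w = orig_shape
    blocks = template.reshape(f, frames_copy, c, channel_copy, h, hw_copy, w, hw_copy)
    votes = blocks.mean(axis=(1, 3, 5, 7))     # fraction of copies that read 1
    return (votes > 0.5).astype(np.uint8)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(2, 4, 8, 8)).astype(np.uint8)
template = embed_bits(bits, frames_copy=8, channel_copy=1, hw_copy=4)
# Simulate distortion by flipping 10% of the replicated bits, then vote.
noise = (rng.random(template.shape) < 0.10).astype(np.uint8)
recovered = extract_bits(template ^ noise, bits.shape, 8, 1, 4)
print("bit accuracy:", (recovered == bits).mean())
```

With 128 copies per bit, a 10% flip rate almost never overturns the majority, which is why heavy replication survives distortion.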

2. Temporal Tamper Localization

  • For ModelScope:
python3 temporal_tamper_localization.py \
	--device 'cuda:0' \
	--model_name modelscope \
	--model_path <your_model_path> \
	--num_inversion_steps 25 \
	--video_frames_dir './results/modelscope/a_red_panda_eating_leaves/wm/frames'
  • For Stable-Video-Diffusion:
python3 temporal_tamper_localization.py \
	--device 'cuda:0' \
	--model_name stable-video-diffusion \
	--model_path <your_model_path> \
	--num_inversion_steps 25 \
	--video_frames_dir './results/stable-video-diffusion/a_red_panda_eating_leaves/wm/frames'

Note:

  • Default video frames directory: './results/stable-video-diffusion/a_red_panda_eating_leaves/wm/frames' (can be modified as needed)
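Temporal localization rests on a simple observation: each frame carries its own slice of the watermark template, so a swapped, dropped, or inserted frame no longer matches the bits expected at its position. A hedged sketch of per-frame bit-accuracy flagging (the 0.9 threshold, bit counts, and function name are illustrative, not the script's actual parameters):

```python
import numpy as np

def flag_tampered_frames(embedded, extracted, thresh=0.9):
    """Flag frames whose per-frame bit accuracy falls below `thresh`.
    embedded/extracted: (frames, bits) arrays of 0/1 watermark bits."""
    acc = (embedded == extracted).mean(axis=1)
    return acc < thresh

rng = np.random.default_rng(1)
embedded = rng.integers(0, 2, size=(16, 256))
extracted = embedded.copy()
# Simulate a frame-swap attack on frames 4 and 5.
extracted[[4, 5]] = extracted[[5, 4]]
# Add light distortion: flip 2% of bits everywhere.
flip = rng.random(extracted.shape) < 0.02
extracted = np.where(flip, 1 - extracted, extracted)
print("tampered frames:", np.nonzero(flag_tampered_frames(embedded, extracted))[0])
```

Swapped frames match a different frame's template (roughly 50% accuracy), while untouched frames stay near 98%, so a mid-range threshold separates them cleanly.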

3. Spatial Tamper Localization

  • For ModelScope:
python3 spatial_tamper_localization.py \
	--device 'cuda:0' \
	--model_name modelscope \
	--model_path <your_model_path> \
	--num_inversion_steps 25 \
	--video_frames_dir './results/modelscope/a_red_panda_eating_leaves/wm/frames'
  • For Stable-Video-Diffusion:
python3 spatial_tamper_localization.py \
	--device 'cuda:0' \
	--model_name stable-video-diffusion \
	--model_path <your_model_path> \
	--num_inversion_steps 25 \
	--video_frames_dir './results/stable-video-diffusion/a_red_panda_eating_leaves/wm/frames'

Note:

  • Default video frames directory: './results/stable-video-diffusion/a_red_panda_eating_leaves/wm/frames' (can be modified as needed)
  • The tampered watermarked video, GT mask, and predicted mask are saved in the ./results directory by default.
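Spatial localization compares the extracted watermark bits to the embedded ones region by region: blocks whose bit accuracy collapses toward chance are marked as tampered, producing the predicted mask that is then scored against the GT mask. A hedged NumPy sketch under an assumed block size and threshold (not the repository's actual values):

```python
import numpy as np

def pred_mask_from_bits(embedded, extracted, block=8, thresh=0.8):
    """Block-wise bit accuracy; blocks below `thresh` form the predicted mask.
    embedded/extracted: (H, W) arrays of 0/1 watermark bits."""
    h, w = embedded.shape
    match = (embedded == extracted).reshape(h // block, block, w // block, block)
    acc = match.mean(axis=(1, 3))         # per-block bit accuracy
    mask = (acc < thresh).astype(np.uint8)
    # Upsample the block mask back to the pixel grid.
    return np.kron(mask, np.ones((block, block), np.uint8)).astype(bool)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(2)
embedded = rng.integers(0, 2, size=(64, 64))
extracted = embedded.copy()
gt = np.zeros((64, 64), bool)
gt[16:40, 16:40] = True                              # tampered region
extracted[gt] = rng.integers(0, 2, size=gt.sum())    # bits there become random
print("IoU:", iou(pred_mask_from_bits(embedded, extracted), gt))
```

Inside the tampered region the extracted bits are effectively random (about 50% accuracy per block), while untouched blocks match perfectly, so block-wise thresholding recovers the region.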

Acknowledgements

This code builds on GaussianShading.

Cite

If you find this repository useful, please consider giving it a star ⭐ and citing:

@inproceedings{hu2025videoshield,
  title={VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking}, 
  author={Runyi Hu and Jie Zhang and Yiming Li and Jiwei Li and Qing Guo and Han Qiu and Tianwei Zhang},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2025}
}
