This repository contains the official implementation for Lite2Relight.
- We recommend Linux for performance and compatibility reasons.
- 1 high-end NVIDIA GPU.
- 64-bit Python 3.10 and PyTorch 2.5.1 (or later).
- CUDA toolkit 12.1 or later.
- Python libraries: see requirements.txt for library dependencies. You can use the following commands with Miniconda3 to create and activate your Python environment:
```shell
conda create --name lite2relight --file requirements.txt
conda activate lite2relight
```
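Before installing the heavier dependencies, it can help to confirm the interpreter and PyTorch availability match the requirements above. The helper below is an illustrative sketch, not part of the repository; it only checks that Python is 3.10+ and that `torch` is importable (version and CUDA support still need to be verified separately):

```python
import importlib.util
import sys


def check_environment():
    """Return (python_ok, torch_available) for the requirements above.

    python_ok: interpreter is 64-bit Python 3.10 or later (version check only).
    torch_available: the `torch` package can be found, without importing it.
    """
    python_ok = sys.version_info >= (3, 10)
    torch_available = importlib.util.find_spec("torch") is not None
    return python_ok, torch_available
```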
Pre-trained networks are stored as *.pkl or *.pt files that can be referenced using local filenames. Please download the models from this Google Drive link. After downloading pretrained.zip, extract it and place the checkpoints and pretrained_models folders in the main directory of this repository.
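After extracting `pretrained.zip`, a quick way to confirm the files ended up in the right place is to list the `*.pkl` and `*.pt` files under the two expected folders. This helper is a hypothetical convenience, not part of the repository:

```python
from pathlib import Path


def find_checkpoints(root="."):
    """List *.pkl and *.pt files in checkpoints/ and pretrained_models/ under root."""
    files = []
    for name in ("checkpoints", "pretrained_models"):
        dir_path = Path(root) / name
        if dir_path.is_dir():
            for pattern in ("*.pkl", "*.pt"):
                files.extend(dir_path.glob(pattern))
    return sorted(files)
```

An empty result means the folders were not placed in the repository's main directory as described above.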
The sample directory provides a quick way to get started. It is structured as follows:
```
sample/
├── dataset
│   └── ID00600
├── envmaps
│   ├── EMAP-0059.png
│   ├── ...
│   └── EMAP-335.png
└── in-the-wild
    └── ID00600.jpg
```
- In-the-wild images: The `sample/in-the-wild` folder contains the test images.
- Preprocessing: To process these images, please follow the data processing steps for in-the-wild portraits as described in the EG3D repository. After preprocessing, you should have a processed folder like `sample/dataset/ID00600`.
- Environment Maps: The `sample/envmaps` folder contains sample environment maps. These are 20x10 HDR environment maps in `.png` format. You can downsample your own HDR environment maps to this format.
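Converting your own HDR environment maps to this 20x10 `.png` format might look like the sketch below. The Reinhard tone-mapping step is an illustrative assumption, not necessarily what the authors used; Pillow and NumPy are assumed to be installed:

```python
import numpy as np
from PIL import Image


def downsample_envmap(hdr, size=(20, 10)):
    """Tone-map a float HDR environment map (H, W, 3) and resize it to 20x10.

    Uses simple Reinhard tone mapping (an assumption for illustration)
    to bring unbounded HDR radiance into [0, 1) before 8-bit quantization.
    """
    ldr = hdr / (1.0 + hdr)                              # Reinhard: [0, inf) -> [0, 1)
    ldr8 = (np.clip(ldr, 0.0, 1.0) * 255).astype(np.uint8)
    return Image.fromarray(ldr8).resize(size, Image.LANCZOS)


# Usage: downsample_envmap(my_hdr_array).save("sample/envmaps/my_envmap.png")
```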
You can run inference on the sample data using the provided script. Before running, make sure to update the checkpoint paths in scripts/inference.sh.
```shell
bash scripts/inference.sh
```
This will run `infer_relit.py` with the appropriate arguments and save the results in the `results/release/` directory.
Training code, which uses the FaceOLAT dataset, will be released; the dataset can be found at the 3DPR project page.
This work is built upon the following amazing projects. We thank the authors for their great work.
- EG3D: Efficient Geometry-aware 3D Generative Adversarial Networks
- GOAE: Make Encoder Great Again in 3D GAN Inversion through Geometry and Occlusion-Aware Encoding
If you find our code or paper useful, please cite as:
```
@inproceedings{prao2024lite2relight,
  title     = {Lite2Relight: 3D-aware Single Image Portrait Relighting},
  author    = {Rao, Pramod and Fox, Gereon and Meka, Abhimitra and B R, Mallikarjun and Zhan, Fangneng and Weyrich, Tim and Bickel, Bernd and Seidel, Hans-Peter and Pfister, Hanspeter and Matusik, Wojciech and Elgharib, Mohamed and Theobalt, Christian},
  booktitle = {ACM SIGGRAPH 2024 Conference Proceedings},
  year      = {2024}
}
```
