This project combines state-of-the-art 3D rendering techniques, including NeRF (Neural Radiance Fields) and Gaussian Splatting, to create a highly detailed and interactive digital twin of a Seat car. The digital twin enables real-time visualization, simulation, and analysis for automotive applications.
NeRF is a neural rendering technique that represents scenes as neural radiance fields. It uses a fully connected neural network to map spatial locations and viewing directions to color and opacity, enabling photorealistic novel-view synthesis. While NeRF achieves impressive results, it is computationally intensive and requires significant time for training and rendering.
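The mapping NeRF learns can be sketched in a few lines. The network below is a toy one-layer stand-in (real NeRF uses roughly eight hidden layers plus positional encoding), but the volume-rendering loop is the standard formulation: densities along a ray are converted to alphas and composited front to back.

```python
import numpy as np

def tiny_nerf_mlp(xyz, view_dir, weights):
    """Toy stand-in for NeRF's fully connected network: maps a 3D point and
    a viewing direction to (r, g, b, sigma). Real NeRF is much deeper and
    uses positional encoding; this is a one-layer illustration."""
    h = np.tanh(np.concatenate([xyz, view_dir]) @ weights["w1"] + weights["b1"])
    out = h @ weights["w2"] + weights["b2"]
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))   # colors squashed into [0, 1]
    sigma = np.log1p(np.exp(out[3]))       # non-negative density (softplus)
    return rgb, sigma

def render_ray(origin, direction, weights, near=2.0, far=6.0, n_samples=64):
    """Classic volume rendering: alpha-composite colors sampled along a ray."""
    t = np.linspace(near, far, n_samples)
    delta = np.append(np.diff(t), 1e10)    # spacing between samples
    color, transmittance = np.zeros(3), 1.0
    for ti, di in zip(t, delta):
        rgb, sigma = tiny_nerf_mlp(origin + ti * direction, direction, weights)
        alpha = 1.0 - np.exp(-sigma * di)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

rng = np.random.default_rng(0)
weights = {
    "w1": rng.normal(scale=0.5, size=(6, 16)), "b1": np.zeros(16),
    "w2": rng.normal(scale=0.5, size=(16, 4)), "b2": np.zeros(4),
}
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), weights)  # one RGB value per ray
```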
Gaussian Splatting is a novel approach to 3D scene representation and rendering. It uses 3D Gaussians to model scenes, optimizing their density and anisotropic covariance for accurate representation. This method achieves state-of-the-art visual quality while enabling real-time rendering at 1080p resolution, making it ideal for applications requiring both speed and quality.
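The core rendering idea can be illustrated with a minimal sketch: each projected Gaussian is evaluated at a pixel via its anisotropic covariance, and depth-sorted splats are alpha-blended front to back. The data layout below is illustrative, not the repository's actual representation.

```python
import numpy as np

def splat_weight(pixel_xy, mean_xy, cov2d):
    """Density of a projected (2D) anisotropic Gaussian at a pixel."""
    d = pixel_xy - mean_xy
    return np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d)

def composite(pixel_xy, gaussians):
    """Front-to-back alpha blending of depth-sorted splats, as in the
    tile-based rasterizer (here a simple per-pixel loop)."""
    color, transmittance = np.zeros(3), 1.0
    for g in sorted(gaussians, key=lambda g: g["depth"]):
        alpha = g["opacity"] * splat_weight(pixel_xy, g["mean"], g["cov"])
        color += transmittance * alpha * g["rgb"]
        transmittance *= 1.0 - alpha
    return color

gaussians = [
    {"mean": np.array([0.0, 0.0]), "cov": np.array([[2.0, 0.8], [0.8, 1.0]]),
     "opacity": 0.9, "rgb": np.array([1.0, 0.2, 0.2]), "depth": 1.0},
    {"mean": np.array([1.0, 0.5]), "cov": np.array([[1.0, 0.0], [0.0, 1.0]]),
     "opacity": 0.7, "rgb": np.array([0.2, 0.2, 1.0]), "depth": 2.0},
]
pixel = composite(np.array([0.2, 0.1]), gaussians)
```

Optimizing the means, covariances, opacities, and colors of millions of such Gaussians against the input photos is what the training step below performs.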
The primary objective of this project is to create a digital twin of a Seat car, which can be used for:
- Real-time visualization: Explore the car's interior and exterior in a virtual environment.
- Simulation and analysis: Test various scenarios, such as lighting conditions or material changes.
- Marketing and training: Provide an interactive experience for customers and staff.
- A CUDA-enabled GPU with at least 24 GB VRAM for training.
- Python 3.8 or higher.
- Conda for environment management.
1. Clone the repository:

   ```bash
   git clone git@github.com:cyu60/hackupc.git --recursive
   cd hackupc
   ```

2. Set up the environment:

   ```bash
   conda env create -f environment.yml
   conda activate seat-car
   ```

3. Download the required datasets and pre-trained models:

   ```bash
   bash download_example_data.sh
   ```
To train a NeRF or Gaussian Splatting model on your dataset:
```bash
python train.py --config config_fern.txt
```

To render a trained model:

```bash
python render.py --model_path <path_to_model>
```

This section provides detailed instructions on how to train NeRF and Gaussian Splatting models using the downloaded repositories.
Before training, you'll need a set of images of the car from multiple viewpoints:
1. Capture Requirements:
   - Consistent lighting conditions
   - 30-50% overlap between adjacent images
   - Coverage of all areas you want to reconstruct

2. Data Organization:
   - For NeRF: arrange your images in the LLFF folder structure
   - For Gaussian Splatting: follow the COLMAP dataset structure
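As a quick sanity check before launching training, a small script can verify a COLMAP-style layout. The folder names below (`images/`, `sparse/0/`) follow common COLMAP conventions and are an assumption about what your processed dataset contains; adjust them to match your actual output.

```python
import os

def check_colmap_layout(root):
    """Return a list of missing pieces in a COLMAP-style dataset folder.
    The expected names (images/, sparse/0/) follow common COLMAP
    conventions and may differ from your pipeline's output."""
    problems = []
    images = os.path.join(root, "images")
    if not os.path.isdir(images):
        problems.append("missing images/ folder")
    elif not any(f.lower().endswith((".jpg", ".jpeg", ".png"))
                 for f in os.listdir(images)):
        problems.append("images/ contains no JPEG/PNG files")
    if not os.path.isdir(os.path.join(root, "sparse", "0")):
        problems.append("missing sparse/0/ (COLMAP reconstruction)")
    return problems
```

An empty returned list means the expected folders and image files are in place.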
Gaussian Splatting generally provides faster training and real-time rendering capabilities, making it ideal for interactive applications.
First, convert your raw images to the required format:
```bash
cd gaussian-splatting-main
python convert.py -s /path/to/your/car/images --resize
```

This script:
- Runs COLMAP to extract camera poses
- Undistorts images
- Creates the necessary folder structure
```bash
python train.py -s /path/to/processed/car/data
```

Advanced options for better results:

```bash
# For high-quality results
python train.py -s /path/to/processed/car/data --iterations 30000 --resolution 2

# For real-time optimization
python train.py -s /path/to/processed/car/data --iterations 15000 --resolution 1 --position_lr_init 0.00016
```

You can monitor the training progress using the provided network viewer:
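The `--position_lr_init` flag sets the starting point of a decaying learning-rate schedule for the Gaussian positions. A sketch of a log-linear (exponential) decay of the kind such schedules use follows; the final value and step count here are illustrative, not necessarily the repository's defaults.

```python
import math

def position_lr(step, lr_init=0.00016, lr_final=0.0000016, max_steps=30000):
    """Exponential (log-linear) interpolation from lr_init down to lr_final
    over max_steps, then held constant. lr_final and max_steps are
    illustrative values, not confirmed repository defaults."""
    t = min(max(step / max_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return math.exp((1.0 - t) * math.log(lr_init) + t * math.log(lr_final))
```

Lowering `lr_init` (as suggested for glossy surfaces below) simply shifts this whole curve down, making position updates more conservative throughout training.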
```bash
cd SIBR_viewers/bin
./SIBR_remoteGaussian_app
```

This allows you to see the 3D model as it's being trained.
NeRF may provide higher-quality results under complex lighting conditions, but it requires more training time and renders more slowly.
For LLFF-format data (real captured images):
```bash
cd nerf-master
python imgs2poses.py /path/to/your/car/images
```

Create a configuration file for your dataset (similar to config_fern.txt):
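`imgs2poses.py` writes a `poses_bounds.npy` file into the data folder with one 17-number row per image: a flattened 3x5 pose matrix (camera rotation and translation plus a height/width/focal column) followed by the near/far depth bounds. A small helper to unpack it, demonstrated on a synthetic array:

```python
import numpy as np

def parse_poses_bounds(data):
    """Split the (N, 17) array stored in poses_bounds.npy into per-image
    3x5 pose matrices and [near, far] depth bounds."""
    poses = data[:, :15].reshape(-1, 3, 5)  # rotation | translation | [h, w, f]
    bounds = data[:, 15:]                   # per-image near/far depth bounds
    return poses, bounds

# Demo with a synthetic array; in practice load the real file, e.g.
#   data = np.load("/path/to/your/car/images/poses_bounds.npy")
data = np.zeros((10, 17))
poses, bounds = parse_poses_bounds(data)
```

Inspecting these poses is a quick way to confirm that pose estimation covered all of your captured viewpoints before committing to a long training run.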
```bash
cd nerf-master

# Example configuration file creation
echo "expname = seat_car
datadir = /path/to/car/data
dataset_type = llff
no_batching = True
use_viewdirs = True
white_bkgd = False
N_samples = 64
N_importance = 128
llffhold = 8" > config_seat_car.txt

# Run training
python run_nerf.py --config config_seat_car.txt
```

For best results:
- Train for at least 200,000 iterations (may take 15+ hours on a single GPU)
- Use `tensorboard --logdir=logs/summaries` to monitor training progress
- The final model and renderings will be saved in the `logs/[expname]` directory
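The `N_samples`/`N_importance` settings in the configuration control NeRF's two-pass hierarchical sampling: the fine pass draws extra depths along each ray in proportion to the coarse pass's compositing weights, via inverse-transform sampling. A minimal sketch:

```python
import numpy as np

def sample_pdf(bin_edges, weights, n_importance, rng):
    """Inverse-transform sampling for NeRF's fine pass: draw n_importance
    extra depths in proportion to the coarse pass's per-bin weights."""
    w = weights + 1e-5                        # avoid an all-zero PDF
    pdf = w / w.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_importance)
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(weights) - 1)
    # linear interpolation inside the chosen bin
    left, right = bin_edges[idx], bin_edges[idx + 1]
    frac = (u - cdf[idx]) / np.maximum(cdf[idx + 1] - cdf[idx], 1e-8)
    return left + frac * (right - left)

rng = np.random.default_rng(0)
edges = np.linspace(2.0, 6.0, 65)                     # N_samples = 64 coarse bins
coarse_weights = np.exp(-((edges[:-1] - 4.0) ** 2))   # pretend surface near t = 4
fine_t = sample_pdf(edges, coarse_weights, 128, rng)  # N_importance = 128
```

This is why raising `N_importance` improves detail near surfaces without wasting samples in empty space.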
1. Interior Scans:
   - Use Gaussian Splatting with depth regularization for better results:

   ```bash
   python train.py -s /path/to/data -d /path/to/depth_maps --depth_sil_weight 0.1
   ```

2. Glossy Surfaces:
   - Lower learning rates can help with reflective surfaces:

   ```bash
   python train.py -s /path/to/data --position_lr_init 0.000016 --scaling_lr 0.001
   ```

3. Exposure Variations:
   - Enable exposure compensation for outdoor captures:

   ```bash
   python train.py -s /path/to/data --exposure_lr_init 0.001 --exposure_lr_final 0.0001
   ```

4. Multi-View Consistency:
   - Add anti-aliasing for better results across viewpoints:

   ```bash
   python train.py -s /path/to/data --antialiasing
   ```
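Exposure compensation of the kind enabled above learns a per-image correction that is applied to rendered colors so that captures taken under changing light still agree. One common parameterization is an affine color transform; the 3x4 layout below is an assumption for illustration, not necessarily the repository's exact model.

```python
import numpy as np

def apply_exposure(rendered_rgb, exposure_3x4):
    """Per-image exposure compensation as an affine color transform:
    corrected = A @ rgb + b. The 3x4 parameterization is one common
    choice; the actual model used during training may differ."""
    A, b = exposure_3x4[:, :3], exposure_3x4[:, 3]
    return rendered_rgb @ A.T + b

identity = np.hstack([np.eye(3), np.zeros((3, 1))])  # starts as a no-op
img = np.random.default_rng(0).uniform(size=(4, 4, 3))
assert np.allclose(apply_exposure(img, identity), img)
```

The `--exposure_lr_init`/`--exposure_lr_final` flags control how aggressively this per-image correction is optimized alongside the scene itself.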
After training, you can render your model or view it interactively:
For Gaussian Splatting, use the interactive viewer:

```bash
cd SIBR_viewers/bin
./SIBR_gaussianViewer_app -m /path/to/trained/model
```

For NeRF, render from a trained model:

```bash
cd nerf-master
python run_nerf.py --config config_seat_car.txt --render_only
```

Rendered images and videos will be saved in the output directory.
This project is hosted on GitHub; the repository is available at https://github.com/cyu60/hackupc.
This project builds upon the original NeRF and 3D Gaussian Splatting implementations. Special thanks to the authors of these methods for their groundbreaking contributions to 3D rendering and scene representation.

