Hugo Blanc · Jean-Emmanuel Deschaud · Alexis Paljic
We present an enhanced differentiable ray-casting algorithm for rendering Gaussians with scene features, enabling efficient 3D scene learning from images.
The setup requires compatible versions of the NVIDIA OptiX SDK (7.6.0 in the examples below), the CUDA Toolkit, and conda; the platform-specific steps below cover their installation and configuration.
Follow the steps below to set up the project on Linux:
# Python-Optix requirements
export OPTIX_PATH=/path/to/optix
# For example, if the OptiX SDK is in your home folder:
# export OPTIX_PATH=~/NVIDIA-OptiX-SDK-7.6.0-linux64-x86_64/
export OPTIX_EMBED_HEADERS=1  # embed the OptiX headers into the package
git clone https://github.com/hugobl1/ray_gauss.git
cd ray_gauss
conda env create --file environment.yml
conda activate ray_gauss
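Optionally, you can check that python-optix was installed correctly during environment creation (the import name optix is the one used by the python-optix package):
python -c "import optix; print('python-optix imported successfully')"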
Follow the steps below to set up the project on Windows:
git clone https://github.com/hugobl1/ray_gauss.git
cd ray_gauss
Install the CUDA Toolkit on Windows, preferably version 12.4: https://developer.nvidia.com/cuda-12-4-1-download-archive?target_os=Windows&target_arch=x86_64
If a different CUDA version is already installed on your Windows machine, edit the environment.yml file to match it. For example, if you have CUDA 11.8:
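(A hypothetical illustration; the exact name of the CUDA pin in environment.yml may differ, so check the file itself.)
# In environment.yml, change the CUDA version pin, e.g. replace
#   - cuda-toolkit=12.4
# with
#   - cuda-toolkit=11.8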
Comment out the python-optix line in the environment.yml file (python-optix needs to be installed from source on Windows).
Now, you can create the ray_gauss conda env:
conda env create --file environment.yml
conda activate ray_gauss
Install python-optix from source:
git clone https://github.com/mortacious/python-optix
cd python-optix
set OPTIX_PATH=\path\to\optix
# For example, the OptiX SDK is installed by default on the C drive:
# set OPTIX_PATH=C:\ProgramData\NVIDIA Corporation\OptiX SDK 7.6.0
set OPTIX_EMBED_HEADERS=1  # embed the OptiX headers into the package
pip install .
You can now train your own model:
cd ..
python main_train.py
Please download and unzip nerf_synthetic.zip in the dataset folder. The folder contains initialization point clouds and the NeRF-Synthetic dataset.
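For example, from the repository root (assuming the archive was downloaded there and contains a top-level nerf_synthetic/ folder):
mkdir -p dataset
unzip nerf_synthetic.zip -d dataset/
# yields paths such as ./dataset/nerf_synthetic/hotdog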
If you would like to directly visualize a model trained by RayGauss, we provide the trained point clouds for each scene in NeRF-Synthetic. In this case, you can skip the training of the scene and evaluate or visualize it directly: Download Link.
Please download the data from the Mip-NeRF 360 website.
Place the datasets in the dataset folder.
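The expected layout is then roughly as follows (the mip_nerf360 folder name is an assumption; only the nerf_synthetic paths appear verbatim later in this README):
dataset/
├── nerf_synthetic/
│   ├── hotdog/
│   └── ...
└── mip_nerf360/
    └── ...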
To reproduce the results on entire datasets, follow the instructions below:
Prepare the Dataset: Ensure the NeRF-Synthetic dataset is downloaded and placed in the dataset directory.
Run Training Script: Execute the following command:
bash nerf_synth.sh
This will start training and evaluation on the NeRF-Synthetic dataset with the configuration parameters in nerf_synthetic.yml.
To reproduce results on the Mip-NeRF 360 dataset:
Prepare the Dataset: Download and place the Mip-NeRF 360 dataset in the dataset directory.
Run Training Script: Execute the following command:
bash mip_nerf360.sh
Results: The results for each scene can be found in the output folder after training is complete.
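For orientation, a rough sketch of the resulting output layout, assembled from the paths mentioned in this README (subfolders appear as the corresponding scripts below are run):
output/
├── name_save_dir/      # one folder per trained scene (from --save_dir)
│   └── saved_pc/       # PLY point clouds exported with convertpth_to_ply.py
└── camera_path/        # written by render_camera_path.py
    ├── images/         # frames rendered along the camera path
    └── video/          # video assembled from the frames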
To train and test a single scene, simply use the following commands:
python main_train.py -config "path_to_config_file" --save_dir "name_save_dir" --arg_names scene.source_path --arg_values "scene_path"
python main_test.py -output "./output/name_save_dir" -iter save_iter
# For example, to train and evaluate the hotdog scene from NeRF Synthetic:
# python main_train.py -config "./configs/nerf_synthetic.yml" --save_dir "hotdog" --arg_names scene.source_path --arg_values "./dataset/nerf_synthetic/hotdog"
# python main_test.py -output "./output/hotdog" -iter 30000
By default, only the last iteration is saved (30000 in the base config files).
To extract a point cloud in PLY format from a trained scene, we provide the script convertpth_to_ply.py, which can be used as follows:
python convertpth_to_ply.py -output "./output/name_scene" -iter num_iter
# For example, if the 'hotdog' scene was trained for 30000 iterations, you can use:
# python convertpth_to_ply.py -output "./output/hotdog" -iter 30000
The generated PLY point cloud will be located in the folder ./output/name_scene/saved_pc/.
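If you want to quickly inspect the exported file, here is a minimal sketch using the plyfile package (an extra dependency, not required by the project; install it with pip install plyfile):
from plyfile import PlyData

# Read the exported point cloud and list its per-point attributes
pc = PlyData.read("path_to_ply_file")
elem = pc.elements[0]
print(f"{elem.name}: {elem.count} points")
print([prop.name for prop in elem.properties])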
To visualize a trained scene, we provide the script main_gui.py, which opens a GUI to display the trained scene:
# Two ways to use the GUI:
# Using the folder of the trained scene and the desired iteration
python main_gui.py -output "./output/name_scene" -iter num_iter
# Using a PLY point cloud:
python main_gui.py -ply_path "path_to_ply_file"
In First Person mode, you can use the keyboard to move the camera in different directions.
Direction Keys:
Z: Move forward
Q: Move backward
S: Move left
D: Move right
A: Move down
E: Move up
View Control with Right Click: hold the right mouse button to control the view direction.
Note: Ensure that the First Person camera mode is active for these controls to work.
In Trackball mode, the camera can be controlled with the mouse to freely view around an object.
Note: Ensure that the Trackball camera mode is active for these controls to work.
To render a camera path from a trained point cloud, use the script as follows:
python render_camera_path.py -output "./output" -camera_path_filename "camera_path.json" -name_video "my_video"
The camera_path.json file, which defines the camera path, can be generated using NeRFStudio.
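For reference, a minimal way to write such a file from Python (the field names follow the NeRFStudio camera-path export and are an assumption here; verify them against the files your NeRFStudio version actually produces):
import json

# Hypothetical minimal camera path in the NeRFStudio export style:
# each keyframe holds a flattened 4x4 camera-to-world matrix and a field of view.
camera_path = {
    "camera_type": "perspective",
    "render_height": 1080,
    "render_width": 1920,
    "fps": 30,
    "seconds": 2.0,
    "camera_path": [
        {
            "camera_to_world": [1, 0, 0, 0,
                                0, 1, 0, 0,
                                0, 0, 1, 3,
                                0, 0, 0, 1],
            "fov": 50,
            "aspect": 16 / 9,
        },
        # ... more keyframes along the trajectory ...
    ],
}

with open("camera_path.json", "w") as f:
    json.dump(camera_path, f, indent=2)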
This script loads a pre-trained model, renders images along the specified camera path, and saves them in output/camera_path/images/. A video is then generated from the images and saved in output/camera_path/video/.
To use a dataset created with Reality Capture, refer to the Reality Capture Instructions.
We thank the authors of Python-Optix, upon which our project is based, as well as the authors of NeRF and Mip-NeRF 360 for providing their datasets. We would also like to acknowledge the authors of 3D Gaussian Splatting, whose dataloader inspired our own, and the authors of Mip-Splatting for the computation of minimum Gaussian sizes as a function of the cameras.
If you find our code or paper useful, please cite:
@misc{blanc2024raygaussvolumetricgaussianbasedray,
      title={RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis},
      author={Hugo Blanc and Jean-Emmanuel Deschaud and Alexis Paljic},
      year={2024},
      eprint={2408.03356},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.03356},
}