Instant-ngp (NeRF only) in pytorch+cuda, trained with pytorch-lightning (high quality with high speed). This repo aims to provide a concise pytorch interface to facilitate future research, and I am grateful if you share it (a citation is highly appreciated)!
Other representative videos are in GALLERY.md.
This implementation has strict requirements due to its dependencies on other libraries. If you encounter installation problems caused by a hardware/software mismatch, I'm afraid I have no intention of supporting other platforms (but you are welcome to contribute).
Clone this repo with `git clone https://github.com/kwea123/ngp_pl`
Python>=3.8 (installation via anaconda is recommended; use `conda create -n ngp_pl python=3.8` to create a conda environment, and activate it with `conda activate ngp_pl`)
Python libraries:
- Install pytorch with `pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113`
- Install torch-scatter following their instructions
- Install tinycudann following their instructions (pytorch extension)
- Install apex following their instructions
- Install the remaining requirements with `pip install -r requirements.txt`
Cuda extension: upgrade `pip` to >= 22.1 and run `pip install models/csrc/` (please re-run this each time you pull the code).
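Condensed into one shell session, the whole setup looks roughly like this (a sketch only: the exact torch-scatter/tinycudann/apex commands depend on your CUDA/pytorch combination, so follow their own instructions for those steps):

```bash
# clone the repo and enter it
git clone https://github.com/kwea123/ngp_pl
cd ngp_pl

# create and activate the conda environment
conda create -n ngp_pl python=3.8
conda activate ngp_pl

# pytorch built against CUDA 11.3
pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# torch-scatter, tinycudann and apex: install following their own instructions
# (commands omitted here; they vary with the CUDA/pytorch versions)

# remaining python libraries
pip install -r requirements.txt

# cuda extension (pip >= 22.1 is required; re-run after every pull)
pip install --upgrade pip
pip install models/csrc/
```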
Download the preprocessed datasets (Synthetic_NeRF, Synthetic_NSVF, BlendedMVS, TanksAndTemples) from NSVF. Do not change the folder names, since my dataloader relies on some hard-coded fixes.
Download data from here. For custom data, run colmap and get a `sparse/0` folder under which there are `cameras.bin`, `images.bin` and `points3D.bin`. Other data in colmap format are also supported.
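For reference, a minimal COLMAP run that produces such a `sparse/0` folder might look like this (a sketch assuming your photos sit in an `images/` folder and the `colmap` CLI is installed; tune the options to your capture):

```bash
# build the feature database and match images
colmap feature_extractor --database_path database.db --image_path images
colmap exhaustive_matcher --database_path database.db

# sparse reconstruction; the first model is written to sparse/0
mkdir -p sparse
colmap mapper --database_path database.db --image_path images --output_path sparse
```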
Download data from here. To convert the hdr images into ldr images for training, run `python misc/prepare_rtmv.py <path/to/RTMV>`; it will create an `images/` folder under each scene folder, and these images will be used for training (and testing).
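For example, assuming the RTMV scenes were downloaded to `data/RTMV` (a hypothetical path):

```bash
python misc/prepare_rtmv.py data/RTMV  # writes an images/ folder into each scene folder
```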
Quickstart: `python train.py --root_dir <path/to/lego> --exp_name Lego`
It will train the Lego scene for 30k steps (8192 rays per step) and perform one test at the end. Training should finish within about 5 minutes (saving test images is slow; add `--no_save_test` to disable it). The test PSNR will be shown at the end.
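For example, the same quickstart run without writing test images (only the flags mentioned above; the path is a placeholder):

```bash
python train.py --root_dir <path/to/lego> --exp_name Lego --no_save_test
```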
More options can be found in `opt.py`.
For training on the other public datasets, please refer to the scripts under `benchmarking`.
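A typical invocation might look like this (the script name below is an assumption for illustration; check the `benchmarking` folder for the actual file names):

```bash
# hypothetical script name; see benchmarking/ for the real ones
bash benchmarking/benchmark_synthetic_nerf.sh
```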
Use `test.ipynb` to generate images. A Lego pretrained model is available here.
GUI usage: run `python show_gui.py` followed by the same hyperparameters used in training (`dataset_name`, `root_dir`, etc.) and add the checkpoint path with `--ckpt_path <path/to/.ckpt>`.
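For example, reusing the quickstart run from above (all paths and the dataset name are placeholders):

```bash
python show_gui.py \
    --dataset_name <name> \
    --root_dir <path/to/lego> \
    --exp_name Lego \
    --ckpt_path <path/to/.ckpt>
```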
I compared the quality (average test PSNR on Synthetic-NeRF) and the inference speed (on the Lego scene) vs. the concurrent work torch-ngp (default settings) and the paper, all trained for about 5 minutes:
Method | avg PSNR | FPS | GPU |
---|---|---|---|
torch-ngp | 31.46 | 18.2 | 2080 Ti |
mine | 32.96 | 36.2 | 2080 Ti |
instant-ngp paper | 33.18 | 60 | 3090 |
As for quality, mine is slightly better than torch-ngp, but the results might fluctuate across different runs.
As for speed, mine is faster than torch-ngp, but still only about half as fast as instant-ngp. Speed depends on the scene (if most of the scene is empty, inference is faster).
Left: torch-ngp. Right: mine.
To run benchmarks, use the scripts under `benchmarking`.
The following are my results, trained using one RTX 2080 Ti (qualitative results here):