
torch-ngp

A PyTorch implementation of instant-ngp, as described in Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.

A GUI for training/visualizing NeRF is also available!

https://user-images.githubusercontent.com/25863658/155265815-c608254f-2f00-4664-a39d-e00eae51ca59.mp4

Install

git clone --recursive https://github.com/ashawkey/torch-ngp.git
cd torch-ngp

Install requirements with pip

pip install -r requirements.txt

# (optional) install the tcnn backbone
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Install requirements with conda

conda env create -f environment.yml
conda activate torch-ngp

Tested environments

Currently, --ff only supports GPUs with CUDA compute capability >= 70 (Volta or newer). On older GPUs, --tcnn can still be used, but it will run slower than on more recent GPUs.
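
To check your GPU's compute capability from PyTorch (a quick sanity check, not part of this repo):

import torch
# prints e.g. sm_75 for a Turing GPU; --ff requires major version >= 7
major, minor = torch.cuda.get_device_capability()
print(f"compute capability: sm_{major}{minor}")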

Usage

We use the same data format as instant-ngp, e.g., the armadillo and fox datasets. Please download them and put them under ./data.
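
For reference, a minimal transforms.json in this format looks roughly like the sketch below (illustrative values only; files produced from colmap may also include camera intrinsics):

{
  "camera_angle_x": 0.69,
  "frames": [
    {
      "file_path": "./images/0001",
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 2.5],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}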

The first run will take some time to compile the CUDA extensions.

### HashNeRF
# train with different backbones (with slower pytorch ray marching)
# for the colmap dataset, the default dataset setting `--mode colmap --bound 2 --scale 0.33` is used.
python main_nerf.py data/fox --workspace trial_nerf # fp32 mode
python main_nerf.py data/fox --workspace trial_nerf --fp16 # fp16 mode (pytorch amp)
python main_nerf.py data/fox --workspace trial_nerf --fp16 --ff # fp16 mode + FFMLP (this repo's implementation)
python main_nerf.py data/fox --workspace trial_nerf --fp16 --tcnn # fp16 mode + official tinycudann's encoder & MLP

# test mode
python main_nerf.py data/fox --workspace trial_nerf --fp16 --ff --test

# use CUDA to accelerate ray marching (much faster!)
python main_nerf.py data/fox --workspace trial_nerf --fp16 --ff --cuda_ray # fp16 mode + FFMLP + cuda raymarching

# preload data onto the GPU; this accelerates training but uses more GPU memory.
python main_nerf.py data/fox --workspace trial_nerf --fp16 --ff --preload

# construct an error_map for each image, and sample rays based on previous training error (experimental)
python main_nerf.py data/fox --workspace trial_nerf --fp16 --ff --preload --error_map
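
The idea behind --error_map: keep a per-pixel training error and draw more rays from high-error pixels. A minimal numpy sketch of the idea (illustrative only, not the repo's actual code):

import numpy as np
H, W, n_rays = 100, 100, 4096
error_map = np.random.rand(H * W)    # hypothetical accumulated per-pixel error
probs = error_map / error_map.sum()  # normalize into a sampling distribution
ray_idx = np.random.choice(H * W, size=n_rays, p=probs)  # rays biased toward high error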

# one for all: -O means --fp16 --cuda_ray --preload, which usually gives the best balance of speed and quality.
python main_nerf.py data/fox --workspace trial_nerf -O
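
The -O shorthand just toggles the other flags. A sketch of how such a shorthand can be expanded with argparse (an assumption about the implementation, not the repo's exact code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-O', action='store_true', help='shorthand for --fp16 --cuda_ray --preload')
parser.add_argument('--fp16', action='store_true')
parser.add_argument('--cuda_ray', action='store_true')
parser.add_argument('--preload', action='store_true')
opt = parser.parse_args(['-O'])  # as if the command line contained -O
if opt.O:
    opt.fp16 = opt.cuda_ray = opt.preload = True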

# start a GUI for NeRF training & visualization
# always use with `--fp16 --ff/tcnn --cuda_ray` for an acceptable framerate!
python main_nerf.py data/fox --workspace trial_nerf -O --gui

# test mode for GUI
python main_nerf.py data/fox --workspace trial_nerf -O --gui --test

# for the blender dataset, you should add `--mode blender --bound 1.0 --scale 0.8 --dt_gamma 0`
# --mode specifies the dataset type ('blender' or 'colmap')
# --bound means the scene is assumed to lie inside the box [-bound, bound]^3
# --scale adjusts the camera locations to make sure they fall inside the above bounding box (see the sketch after these commands)
# --dt_gamma controls the adaptive ray marching step size; setting it to 0 disables it
python main_nerf.py data/nerf_synthetic/lego --workspace trial_nerf -O --mode blender --bound 1.0 --scale 0.8 --dt_gamma 0 
python main_nerf.py data/nerf_synthetic/lego --workspace trial_nerf -O --mode blender --bound 1.0 --scale 0.8 --dt_gamma 0 --gui
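
To build intuition for --scale and --bound: --scale shrinks the camera positions toward the origin so that the scene fits inside the [-bound, bound]^3 box. A minimal numpy sketch (illustrative only, not the repo's actual code):

import numpy as np
bound, scale = 1.0, 0.8
pose = np.eye(4)                             # hypothetical 4x4 camera-to-world matrix
pose[:3, 3] = [0.5, -1.2, 0.9]               # camera position in world space
pose[:3, 3] *= scale                         # --scale rescales the camera position
print(np.all(np.abs(pose[:3, 3]) <= bound))  # True: camera lands inside [-bound, bound]^3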

# for the LLFF dataset, you should first convert it to nerf-compatible format:
python llff2nerf.py data/nerf_llff_data/fern # by default it uses the full-resolution images and writes `transforms.json` to the folder
python llff2nerf.py data/nerf_llff_data/fern --images images_4 --downscale 4 # if you prefer the low-resolution images
# then you can train it as a colmap dataset (tune the scale & bound if necessary):
python main_nerf.py data/nerf_llff_data/fern --workspace trial_nerf -O
python main_nerf.py data/nerf_llff_data/fern --workspace trial_nerf -O --gui

# for custom dataset, you should:
# 1. take a video / many photos from different views 
# 2. put the video under a path like ./data/custom/video.mp4 or the images under ./data/custom/images/*.jpg.
# 3. call the preprocessing script (install ffmpeg and colmap first! refer to the script for more options):
python colmap2nerf.py --video ./data/custom/video.mp4 --run_colmap # if using a video
python colmap2nerf.py --images ./data/custom/images/ --run_colmap # if using images
# 4. it should create transforms.json, and you can then train with (experiment with different scale, bound, and dt_gamma values so the object sits inside the bounding box and renders smoothly):
python main_nerf.py data/custom --workspace trial_nerf_custom -O --gui --scale 2.0 --bound 1.0 --dt_gamma 0.02

### SDF
python main_sdf.py data/armadillo.obj --workspace trial_sdf
python main_sdf.py data/armadillo.obj --workspace trial_sdf --fp16
python main_sdf.py data/armadillo.obj --workspace trial_sdf --fp16 --ff
python main_sdf.py data/armadillo.obj --workspace trial_sdf --fp16 --tcnn
python main_sdf.py data/armadillo.obj --workspace trial_sdf --fp16 --ff --test

### TensoRF
# almost the same as HashNeRF, just replace the main script.
python main_tensoRF.py data/fox --workspace trial_tensoRF -O
python main_tensoRF.py data/nerf_synthetic/lego --workspace trial_tensoRF -O --mode blender --bound 1.0 --scale 0.8 --dt_gamma 0 

### DyNeRF
#### Dataset: e.g., the coffee_martini scene from the DyNeRF dataset
#### Train:
python main_dynerf.py /path/to/dataset --workspace /path/to/workspace -O --scale 0.1
# some parameters
# error_map: 'isg' or 'ist' (ray importance sampling; refer to the DyNeRF paper), or the default 'error_map'
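
The 'ist' option follows DyNeRF-style importance sampling over time: pixels that change between frames are weighted more heavily. A rough numpy sketch of the weighting (an interpretation of the paper, not this repo's code):

import numpy as np
frames = np.random.rand(2, 100, 100)        # two consecutive (grayscale) video frames
ist_weight = np.abs(frames[1] - frames[0])  # temporal difference drives the weights
probs = (ist_weight / ist_weight.sum()).ravel()
ray_idx = np.random.choice(probs.size, size=4096, p=probs)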

#### Test
python main_dynerf.py /path/to/dataset --workspace /path/to/workspace -O --scale 0.1 --test

Check the scripts directory for more examples.

Performance Reference

Tested with the default settings on the Lego test dataset. Here the speed refers to the iterations per second on a TITAN RTX.

Model                              PSNR    Train Speed   Test Speed
HashNeRF (fp16 + ff)               32.84   22            0.54
HashNeRF (fp16 + cuda_ray + ff)    32.81   80            7.0
TensoRF (fp16)                     33.81   18            0.53
TensoRF (fp16 + cuda_ray)          33.83   46            3.4

Difference from the original implementation

Progress

As the official PyTorch extension tinycudann has been released, the implementations in this repo can be used as modular alternatives to it. These modules aim to be on par with tinycudann in performance and speed, and tinycudann itself can be used as the backbone via the --tcnn flag.
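
With tinycudann installed, its hash-grid encoder can also be used standalone. A minimal sketch of the bindings (the config values here are illustrative, not this repo's exact settings):

import torch
import tinycudann as tcnn

# hash-grid encoder in the style of instant-ngp; all hyperparameters are examples
encoder = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
)
x = torch.rand(1024, 3, device="cuda")  # query positions in [0, 1]^3
features = encoder(x)                   # (1024, 16 * 2) encoded features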

Update Log

Acknowledgement