Clément Jambon, Bernhard Kerbl, Georgios Kopanas, Stavros Diolatzis, Thomas Leimkühler, George Drettakis
| Webpage | Full Paper | Dataset | Video |
| Other GRAPHDECO Publications | FUNGRAPH project page |
Abstract: Neural Radiance Fields (NeRFs) have revolutionized novel view synthesis for captured scenes, with recent methods allowing interactive free-viewpoint navigation and fast training for scene reconstruction. However, the implicit representations used by these methods — often including neural networks and complex encodings — make them difficult to edit. Some initial methods have been proposed, but they suffer from limited editing capabilities and/or from a lack of interactivity, and are thus unsuitable for interactive editing of captured scenes. We tackle both limitations and introduce NeRFshop, a novel end-to-end method that allows users to interactively select and deform objects through cage-based transformations. NeRFshop provides fine scribble-based user control for the selection of regions or objects to edit, semi-automatic cage creation, and interactive volumetric manipulation of scene content thanks to our GPU-friendly two-level interpolation scheme. Further, we introduce a preliminary approach that reduces potential resulting artifacts of these transformations with a volumetric membrane interpolation technique inspired by Poisson image editing and provide a process that "distills" the edits into a standalone NeRF representation.
This research was funded by the ERC Advanced grant FUNGRAPH No 788065 http://fungraph.inria.fr. The authors are grateful to the OPAL infrastructure from Université Côte d’Azur for providing resources and support. The authors would also like to thank Adobe for their generous research and software donations.
@Article{NeRFshop23,
author = {Jambon, Cl\'ement and Kerbl, Bernhard and Kopanas, Georgios and Diolatzis, Stavros and Leimk{\"u}hler, Thomas and Drettakis, George},
title = {NeRFshop: Interactive Editing of Neural Radiance Fields},
journal = {Proceedings of the ACM on Computer Graphics and Interactive Techniques},
number = {1},
volume = {6},
month = {May},
year = {2023},
url = {https://repo-sam.inria.fr/fungraph/nerfshop/}
}
For Windows, if you have a graphics card in the RTX 2000, 3000, or 4000 series, you can download the following binaries and execute the .exe file corresponding to your device. The binaries must be run from the command line (see below).
NOTE: these binaries might not reflect the latest changes on the main branch.
pip install -r requirements.txt
If you are using a Debian-based Linux distribution, install the following packages:
sudo apt-get install build-essential git python3-dev python3-pip libopenexr-dev libxi-dev \
libglfw3-dev libglew-dev libomp-dev libxinerama-dev libxcursor-dev \
xorg-dev libeigen3-dev
First clone the repo, e.g., via git command line with:
git clone --recursive <link to repository>
Then, build with CMake, e.g., via command line:
cd nerfshop
cmake . -B build
If you used CMake to generate a solution for your C++ IDE, you may continue by building from inside it (e.g., Microsoft Visual Studio). We recommend the RelWithDebInfo configuration as the default. Alternatively, you can build a desired configuration (e.g., Release or RelWithDebInfo) directly via the command line:
cmake --build build --config RelWithDebInfo -j
If CUDA is not on your PATH, you can modify `~/.bashrc` or `~/.zshrc` (or any other shell configuration file) by adding the following lines:
export PATH="/usr/local/cuda-XX.X/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-XX.X/lib64:$LD_LIBRARY_PATH"
where XX.X stands for your CUDA version (e.g., 11.4).
Run nerfshop from (or ensure that the working directory of the executable is) its root directory. As NeRFshop was built directly on top of Instant-NGP, pre-processing and loading scenes are done in a similar fashion to the original implementation. Please refer to the corresponding documentation for details.
For example, if you followed the build instructions from above you can start our statues scene with:
./build/nerfshop --scene data/nerfshop_scenes/statues/transforms.json
If you use the pre-built Windows executables instead, you can achieve the same with:
<executable-name> --scene <path to downloaded scenes>/statues/transforms.json
Additionally, if there is a snapshot available, you can load it with:
./build/nerfshop --scene data/nerfshop_scenes/statues/transforms.json --snapshot data/nerfshop_scenes/statues/base.msgpack
NOTE: NeRFshop currently only supports scenes with `aabb_scale` values (as defined here) ranging from 1 to 16.
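If you are unsure which value a downloaded or self-prepared scene uses, you can inspect (and, if needed, clamp) it directly in the scene's transforms.json. The snippet below is a minimal sketch, not part of NeRFshop: the path is an example, and note that lowering `aabb_scale` shrinks the reconstruction volume and may crop scene content.

```python
import json

path = "data/nerfshop_scenes/statues/transforms.json"  # example scene path
with open(path) as f:
    transforms = json.load(f)

aabb_scale = transforms.get("aabb_scale", 1)  # Instant-NGP treats a missing value as 1
if not 1 <= aabb_scale <= 16:
    print(f"aabb_scale={aabb_scale} is outside NeRFshop's supported range [1, 16]")
    # Clamping may crop content that lies outside the smaller bounding box.
    transforms["aabb_scale"] = min(max(aabb_scale, 1), 16)
    with open(path, "w") as f:
        json.dump(transforms, f, indent=2)
```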
NeRFshop relies on a simple and intuitive workflow made up of successive steps, each detailed in the following paragraphs:
Editing is based on composable operators. To instantiate a cage-based operator, push the `Cage` button in the Editing Window. An expandable tab should then appear below.
To select an object, keep the `b` key pressed down and paint the screen-space region of the object that you wish to capture. Once you're done, push `PROJECT` to project your selection onto the object in 3D.
Once your selection has been projected, region growing (a.k.a. flood filling) can be performed using the `GROW REGION` button.
You can adjust the number of growing steps for each iteration using the `Growing steps` slider. Note that for now, there is no way to revert a growing iteration. However, you can prune the selection at any time by keeping `Shift` pressed and capturing points with a rectangle, or keeping `Ctrl` pressed and painting the points that you want to select, and then hitting `Delete`. You can clear the selection entirely with `Alt`.
When you are satisfied with your current 3D selection, you can run our automatic cage-building algorithm by pushing `COMPUTE PROXY`.
If you want more control over the cage after it is built, you can tick the `Cage refinement` option, push `EXTRACT CAGE`, edit the resulting cage (similarly to a standard cage-based deformation as described below), and finally hit `COMPUTE PROXY` to derive the final cage.
Once the cage is built, you can manipulate it by keeping `Shift` pressed and capturing points with a rectangle, or keeping `Ctrl` pressed and painting the points that you want to select (the size of the brush can be changed with the mouse wheel). At any time, you can clear your selection with `Alt`.
Supported transformations are translation, rotation and scaling:
Once you are satisfied with your edit, you can visualize the final result by hiding the cage (set the "Operator Visualisation" mode to "Off") and the gizmo (by deselecting all points with `Alt`). Note that once you have completed all your edits, you can export a standalone NeRF via distillation as described here.
In real scenes, artifacts can be carried with the cage when trying to insert a displaced object at a novel location. To mitigate this phenomenon, we propose a Membrane Interpolation Correction. To turn it on, make sure you have a valid cage and tick `Apply Membrane Correction`. You can adjust the strength of the correction with the auxiliary slider.
As with image or 3D geometry editing tools, we offer the possibility to export your edited scene into a standalone NeRF format, namely the format supported by the modern implementation of Instant-NGP (`.ingp`). Once you are happy with your edits, you can click the `Distill` button at the bottom of the Editing window. This will start NeRF training again, but the training will respect the operators you placed. You can stop training when the quality is sufficient for your needs. To save your distilled NeRF so it can be loaded by Instant-NGP-based applications, click the `Save` button in the `Snapshot` section of the main window.
This feature is in alpha and has known limitations. Distilling does not (yet) account for membrane interpolation correction. Distilling destructive edits will cause artifacts: a destructive edit is when an object A is moved or deformed such that it (partially) overlaps with another object B, and thus B is (partially) destroyed/overwritten. However, it is possible to first move B out of the way (to some empty space) and then move A to where B used to be. After distilling, you might then make object B vanish (see below).
We also provide the possibility of removing objects (e.g., floaters). To do so, follow the workflow described above to obtain a proxy that satisfies your needs and hit the `Vanish!` button.
For floaters, you may need a more aggressive Projection Threshold. This can be achieved with the dedicated slider as shown below:
Please note that distilling will NOT consider vanished objects. Also, empty space cleaning can interfere with previously vanished objects. The suggested workflow is thus as follows:
NeRFshop supports multiple, composable edits: simply add new operators as described earlier. To switch the active operator, use the dedicated slider.
Empty space cleaning can be triggered manually by pushing the `Clean Empty Space` button. Alternatively, if your hardware performance permits it, you may also tick the checkbox next to it to have empty space cleaning happen automatically in the background.
If performance is an issue (e.g., for scenes with a large `aabb_scale`), you can reduce the rendering resolution from the command line:
../build/nerfshop --scene [SCENE] --height 720 --width 1280
Objects can also be duplicated using the `Affine Duplication` operator.
Below, we provide the original README for Instant-NGP, as per the version that NeRFshop is based on.
Ever wanted to train a NeRF model of a fox in under 5 seconds? Or fly around a scene captured from photos of a factory robot? Of course you have!
Here you will find an implementation of four neural graphics primitives: neural radiance fields (NeRF), signed distance functions (SDFs), neural images, and neural volumes. In each case, we train and render an MLP with multiresolution hash input encoding using the tiny-cuda-nn framework.
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
ACM Transactions on Graphics (SIGGRAPH), July 2022
Project page / Paper / Video / BibTeX
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing
pip install -r requirements.txt
If you are using a Debian-based Linux distribution, install the following packages:
sudo apt-get install build-essential git python3-dev python3-pip libopenexr-dev libxi-dev \
libglfw3-dev libglew-dev libomp-dev libxinerama-dev libxcursor-dev
Alternatively, if you are using Arch or Arch derivatives, install the following packages
sudo pacman -S base-devel cmake openexr libxi glfw openmp libxinerama libxcursor
We also recommend installing CUDA and OptiX in /usr/local/
and adding the CUDA installation to your PATH.
For example, if you have CUDA 11.4, add the following to your ~/.bashrc
export PATH="/usr/local/cuda-11.4/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH"
For Arch and derivatives,
sudo pacman -S cuda
Begin by cloning this repository and all its submodules using the following command:
$ git clone --recursive https://github.com/nvlabs/instant-ngp
$ cd instant-ngp
Then, use CMake to build the project: (on Windows, this must be in a developer command prompt)
instant-ngp$ cmake . -B build
instant-ngp$ cmake --build build --config RelWithDebInfo -j 16
If the build fails, please consult this list of possible fixes before opening an issue.
If the build succeeds, you can now run the code via the `build/testbed` executable or the `scripts/run.py` script described below.
If automatic GPU architecture detection fails (as can happen if you have multiple GPUs installed), set the `TCNN_CUDA_ARCHITECTURES` environment variable for the GPU you would like to use. The following table lists the values for common GPUs. If your GPU is not listed, consult this exhaustive list.
RTX 30X0 | A100 | RTX 20X0 | TITAN V / V100 | GTX 10X0 / TITAN Xp | GTX 9X0 | K80 |
---|---|---|---|---|---|---|
86 | 80 | 75 | 70 | 61 | 52 | 37 |
This codebase comes with an interactive testbed that offers many features beyond our academic publication, including `NeRF->Mesh` and `SDF->Mesh` conversion.
One test scene is provided in this repository, using a small number of frames from a casually captured phone video:
instant-ngp$ ./build/testbed --scene data/nerf/fox
Alternatively, download any NeRF-compatible scene (e.g. from the NeRF authors' drive, the SILVR dataset, or the DroneDeploy dataset). Now you can run:
instant-ngp$ ./build/testbed --scene data/nerf_synthetic/lego/transforms_train.json
To prepare your own dataset for use with our NeRF implementation, click here.
instant-ngp$ ./build/testbed --scene data/sdf/armadillo.obj
instant-ngp$ ./build/testbed --scene data/image/albert.exr
To reproduce the gigapixel results, download, for example, the Tokyo image and convert it to `.bin` using the `scripts/convert_image.py` script. This custom format improves compatibility and loading speed when resolution is high. Now you can run:
instant-ngp$ ./build/testbed --scene data/image/tokyo.bin
Download the nanovdb volume for the Disney cloud, which is derived from here (CC BY-SA 3.0).
instant-ngp$ ./build/testbed --mode volume --scene data/volume/wdas_cloud_quarter.nvdb
To conduct controlled experiments in an automated fashion, all features from the interactive testbed (and more!) have Python bindings that can be easily instrumented.
For an example of how the `./build/testbed` application can be implemented and extended from within Python, see `./scripts/run.py`, which supports a superset of the command line arguments that `./build/testbed` does.
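As a rough illustration of what the bindings look like, here is a minimal sketch (not the authoritative API; see `scripts/run.py` for real usage) that trains a NeRF for a fixed number of steps and saves a snapshot. It assumes the compiled `pyngp` module is importable, e.g., because the build folder is on `sys.path` as `scripts/run.py` arranges.

```python
import sys
sys.path.append("build")  # adjust to wherever pyngp was built

import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")   # any NeRF-style scene folder
testbed.shall_train = True

for _ in range(2000):                         # each call advances training (and renders if a GUI is open)
    testbed.frame()

testbed.save_snapshot("fox_base.msgpack", False)  # can later be reloaded with load_snapshot
```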
If you'd rather build new models from the hash encoding and fast neural networks, consider the tiny-cuda-nn's PyTorch extension.
Happy hacking!
Before compiling the new bindings, make sure you're running Python with PyTorch (>=1.10.0) installed, then:
instant-ngp$ python setup.py install
Q: How can I run instant-ngp in headless mode?
A: Use `./build/testbed --no-gui` or `python scripts/run.py`. You can also compile without GUI via `cmake -DNGP_BUILD_WITH_GUI=off ...`
Q: Does this codebase run on Google Colab?
A: Yes. See this example by user @myagues. Caveat: this codebase requires large amounts of GPU RAM and might not fit on your assigned GPU. It will also run slower on older GPUs.
Q: Is there a Docker container?
A: Yes. We bundle a Visual Studio Code development container, the `.devcontainer/Dockerfile` of which you can also use stand-alone.
If you want to run the container without using VSCode:
docker-compose -f .devcontainer/docker-compose.yml build instant-ngp
xhost local:root
docker-compose -f .devcontainer/docker-compose.yml run instant-ngp /bin/bash
Then run the build commands above as normal.
Q: How can I edit and train the underlying hash encoding or neural network on a new task?
A: Use tiny-cuda-nn's PyTorch extension.
Q: How can I save the trained model and load it again later?
A: Two options:
1. Use the `Snapshot` section of the GUI (see the `Save` button described above).
2. Use the Python bindings `load_snapshot` / `save_snapshot` (see `scripts/run.py` for example usage).
Q: Can this codebase use multiple GPUs at the same time?
A: No. To select a specific GPU to run on, use the CUDA_VISIBLE_DEVICES environment variable. To optimize the compilation for that specific GPU use the TCNN_CUDA_ARCHITECTURES environment variable.
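If you drive the testbed through the Python bindings, the same GPU selection can be made from inside the script, provided the variable is set before CUDA is initialized. A minimal sketch (the GPU index is just an example):

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before any CUDA initialization

import pyngp as ngp  # imported afterwards, so it only sees the selected GPU
```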
Q: What is the coordinate system convention?
A: See this helpful diagram by user @jc211.
Q: The NeRF reconstruction of my custom dataset looks bad; what can I do?
A: There could be multiple issues. One common one: the scene bounds parameter (`aabb_scale`) might have been tuned suboptimally. We recommend starting with `aabb_scale=16` and then decreasing it to `8`, `4`, `2`, and `1` until you get optimal quality.
Q: Why are background colors randomized during NeRF training?
A: Transparency in the training data indicates a desire for transparency in the learned model. Using a solid background color, the model can minimize its loss by simply predicting that background color, rather than transparency (zero density). By randomizing the background colors, the model is forced to learn zero density to let the randomized colors "shine through".
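To make the intuition concrete, here is a schematic sketch of how the random background enters a rendered pixel (illustrative names only, not code from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def composite(ray_rgb, ray_alpha):
    # ray_rgb: color accumulated along the ray; ray_alpha: accumulated opacity in [0, 1].
    # With a fixed background, predicting that color everywhere minimizes the loss;
    # with a random one, only true transparency (density -> 0) keeps the loss low.
    background = rng.random(3)                  # fresh random background color
    return ray_rgb + (1.0 - ray_alpha) * background
```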
Q: How to mask away NeRF training pixels (e.g. for dynamic object removal)?
A: For any training image `xyz.*` with dynamic objects, you can provide a `dynamic_mask_xyz.png` in the same folder. This file must be in PNG format, where non-zero pixel values indicate masked-away regions.
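For instance, a mask for a hypothetical training image xyz.jpg could be written as follows (a minimal sketch; the folder, file name, and masked rectangle are placeholders):

```python
import numpy as np
from PIL import Image

image = Image.open("data/my_scene/images/xyz.jpg")            # the training image
mask = np.zeros((image.height, image.width), dtype=np.uint8)  # zero = pixel is used for training
mask[100:400, 250:600] = 255                                  # non-zero = masked away
Image.fromarray(mask).save("data/my_scene/images/dynamic_mask_xyz.png")
```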
Before investigating further, make sure all submodules are up-to-date and try compiling again.
instant-ngp$ git submodule sync --recursive
instant-ngp$ git submodule update --init --recursive
If instant-ngp still fails to compile, update CUDA as well as your compiler to the latest versions you can install on your system. It is crucial that you update both, as newer CUDA versions are not always compatible with earlier compilers and vice versa. If your problem persists, consult the following table of known issues.
*After each step, delete the `build` folder and let CMake regenerate it before trying again.*
Problem | Resolution |
---|---|
CMake error: No CUDA toolset found / CUDA_ARCHITECTURES is empty for target "cmTC_0c70f" | Windows: the Visual Studio CUDA integration was not installed correctly. Follow these instructions to fix the problem without re-installing CUDA. (#18) |
 | Linux: Environment variables for your CUDA installation are probably incorrectly set. You may work around the issue using cmake . -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda-<your cuda version>/bin/nvcc (#28) |
CMake error: No known features for CXX compiler "MSVC" | Reinstall Visual Studio & make sure you run CMake from a developer shell. (#21) |
Compile error: A single input file is required for a non-link phase when an outputfile is specified | Ensure there are no spaces in the path to instant-ngp. Some build systems seem to have trouble with those. (#39 #198) |
Compile error: undefined references to "cudaGraphExecUpdate" / identifier "cublasSetWorkspace" is undefined | Update your CUDA installation (which is likely 11.0) to 11.3 or higher. (#34 #41 #42) |
Compile error: too few arguments in function call | Update submodules with the above two git commands. (#37 #52) |
Python error: No module named 'pyngp' | It is likely that CMake did not detect your Python installation and therefore did not build pyngp. Check CMake logs to verify this. If pyngp was built in a different folder than instant-ngp/build, Python will be unable to detect it and you have to supply the full path to the import statement. (#43) |
If you cannot find your problem in the table, please feel free to open an issue and ask for help.
Many thanks to Jonathan Tremblay and Andrew Tao for testing early versions of this codebase and to Arman Toorians and Saurabh Jain for the factory robot dataset. We also thank Andrew Webb for noticing that one of the prime numbers in the spatial hash was not actually prime; this has been fixed since.
This project makes use of a number of awesome open source libraries; you can find them, along with their licenses, in the `dependencies` folder.
Many thanks to the authors of these brilliant projects!
@article{mueller2022instant,
author = {Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller},
title = {Instant Neural Graphics Primitives with a Multiresolution Hash Encoding},
journal = {ACM Trans. Graph.},
issue_date = {July 2022},
volume = {41},
number = {4},
month = jul,
year = {2022},
pages = {102:1--102:15},
articleno = {102},
numpages = {15},
url = {https://doi.org/10.1145/3528223.3530127},
doi = {10.1145/3528223.3530127},
publisher = {ACM},
address = {New York, NY, USA},
}
Copyright © 2022, NVIDIA Corporation. All rights reserved.
This work is made available under the Nvidia Source Code License-NC. Click here to view a copy of this license.