Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, Achuta Kadambi (* indicates equal contribution)
| Webpage | Full Paper | Video | Viewer Pre-built for Windows
Abstract: 3D scene representations have gained immense popularity in recent years. Methods that use Neural Radiance Fields are versatile for traditional tasks such as novel view synthesis. In recent times, some work has emerged that aims to extend the functionality of NeRF beyond view synthesis, for semantically aware tasks such as editing and segmentation using 3D feature field distillation from 2D foundation models. However, these methods have two major limitations: (a) they are limited by the rendering speed of NeRF pipelines, and (b) implicitly represented feature fields suffer from continuity artifacts that reduce feature quality. Recently, 3D Gaussian Splatting has shown state-of-the-art performance on real-time radiance field rendering. In this work, we go one step further: in addition to radiance field rendering, we enable 3D Gaussian Splatting on arbitrary-dimension semantic features via 2D foundation model distillation. This translation is not straightforward: naively incorporating feature fields in the 3DGS framework encounters significant challenges, notably the disparities in spatial resolution and channel consistency between RGB images and feature maps. We propose architectural and training changes to efficiently avert this problem. Our proposed method is general, and our experiments showcase novel view semantic segmentation, language-guided editing, and segment anything through learning feature fields from state-of-the-art 2D foundation models such as SAM and CLIP-LSeg. Across experiments, our distillation method is able to provide comparable or better results, while being significantly faster to both train and render. Additionally, to the best of our knowledge, we are the first method to enable point and bounding-box prompting for radiance field manipulation, by leveraging the SAM model.
@inproceedings{zhou2024feature,
title={Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields},
author={Zhou, Shijie and Chang, Haoran and Jiang, Sicheng and Fan, Zhiwen and Zhu, Zehao and Xu, Dejia and Chari, Pradyumna and You, Suya and Wang, Zhangyang and Kadambi, Achuta},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21676--21685},
year={2024}
}
Our default, provided install method is based on Conda package and environment management:
conda create --name feature_3dgs python=3.8
conda activate feature_3dgs
PyTorch (Please check your CUDA version, we used 11.8)
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
Required packages
pip install -r requirements.txt
Submodules
New: Our Parallel N-dimensional Gaussian Rasterizer now supports RGB, arbitrary N-dimensional Feature, and Depth rendering.
pip install submodules/diff-gaussian-rasterization-feature # Rasterizer for RGB, n-dim feature, depth
pip install submodules/simple-knn
We follow the same dataset logistics as 3D Gaussian Splatting. If you want to work with your own scene, put the images you want to use in a directory <location>/input.
<location>
|---input
|---<image 0>
|---<image 1>
|---...
For rasterization, the camera models must be either SIMPLE_PINHOLE or PINHOLE. We provide a converter script, convert.py, to extract undistorted images and SfM information from input images. Optionally, you can use ImageMagick to resize the undistorted images. This rescaling is similar to MipNeRF360, i.e., it creates images with 1/2, 1/4 and 1/8 the original resolution in corresponding folders. To use them, please first install a recent version of COLMAP (ideally CUDA-powered) and ImageMagick.
If you have COLMAP and ImageMagick on your system path, you can simply run
python convert.py -s <location> [--resize] #If not resizing, ImageMagick is not needed
Our COLMAP loaders expect the following dataset structure in the source path location:
<location>
|---images
| |---<image 0>
| |---<image 1>
| |---...
|---sparse
|---0
|---cameras.bin
|---images.bin
|---points3D.bin
Alternatively, you can use the optional parameters --colmap_executable and --magick_executable to point to the respective paths. Please note that on Windows, the executable should point to the COLMAP .bat file that takes care of setting the execution environment. Once done, <location> will contain the expected COLMAP dataset structure with undistorted, resized input images, in addition to your original images and some temporary (distorted) data in the directory distorted.
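If you want to double-check that the resulting sparse model really uses the PINHOLE/SIMPLE_PINHOLE camera models required above, the small sketch below does so with the pycolmap package. This is an optional check, not part of this repo; the exact attribute name for the camera model varies slightly across pycolmap versions, as noted in the comments.
# Optional sanity check of the undistorted COLMAP model (assumes `pip install pycolmap`).
import pycolmap

rec = pycolmap.Reconstruction("<location>/sparse/0")
for cam_id, cam in rec.cameras.items():
    # Newer pycolmap exposes the model as an enum (`cam.model`); older versions
    # expose `cam.model_name` -- the getattr fallback covers both.
    name = getattr(cam, "model_name", None) or str(cam.model)
    print(cam_id, name)
    if "PINHOLE" not in name:
        print(f"Camera {cam_id} uses {name}; run convert.py to undistort first.")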
If you have your own COLMAP dataset without undistortion (e.g., using an OPENCV camera model), you can try to just run the last part of the script: put the images in input and the COLMAP info in a subdirectory distorted:
<location>
|---input
| |---<image 0>
| |---<image 1>
| |---...
|---distorted
|---database.db
|---sparse
|---0
|---...
Then run
python convert.py -s <location> --skip_matching [--resize] #If not resizing, ImageMagick is not needed
Download the LSeg model file demo_e200.ckpt from the Google Drive and place it under the folder encoders/lseg_encoder.
cd encoders/lseg_encoder
python -u encode_images.py --backbone clip_vitl16_384 --weights demo_e200.ckpt --widehead --no-scaleinv --outdir ../../data/DATASET_NAME/rgb_feature_langseg --test-rgb-dir ../../data/DATASET_NAME/images --workers 0
This may produce large feature map files in --outdir (100-200 MB per file).
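Before training, you can quickly inspect one of the exported feature maps. The sketch below is a rough check, not part of the repo: the actual file format written by encode_images.py is an assumption here, so adapt the loader to whatever extension you find in --outdir.
# Quick inspection of one exported LSeg feature map (format is an assumption).
from pathlib import Path
import numpy as np
import torch

feat_dir = Path("data/DATASET_NAME/rgb_feature_langseg")   # the --outdir above, from the repo root
path = next(p for p in sorted(feat_dir.iterdir()) if p.suffix in {".pt", ".npy"})
feat = torch.load(path) if path.suffix == ".pt" else torch.from_numpy(np.load(path))
print(path.name, tuple(feat.shape), feat.dtype)            # expect 512 feature channels for LSeg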
Run train.py. If reconstruction fails, change --scale 4.0 to a smaller or larger value, e.g., --scale 1.0 or --scale 16.0.
The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
SAM setup:
cd encoders/sam_encoder
pip install -e .
Pretrained model download:
Click the links below to download the checkpoint for the corresponding model type.
- default or vit_h: ViT-H SAM model.
- vit_l: ViT-L SAM model.
- vit_b: ViT-B SAM model.
Then place it under the folder encoders/sam_encoder/checkpoints.
Run the following to export the image embeddings of an input image or directory of images.
cd encoders/sam_encoder
python export_image_embeddings.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --input ../../data/DATASET_NAME/images --output ../../data/OUTPUT_NAME/sam_embeddings
We are glad to introduce a brand-new Multi-functional Interactive Viewer for visualizing RGB, Depth, Edge, Normal, Curvature, and especially semantic features. The Pre-built Viewer for Windows is placed in viewer_windows and can also be downloaded here. If your OS is Ubuntu 22.04, you need to compile the viewer locally:
# Dependencies
sudo apt install -y libglew-dev libassimp-dev libboost-all-dev libgtk-3-dev libopencv-dev libglfw3-dev libavdevice-dev libavcodec-dev libeigen3-dev libxxf86vm-dev libembree-dev
# Project setup
cd SIBR_viewers
cmake -Bbuild . -DCMAKE_BUILD_TYPE=Release # add -G Ninja to build faster
cmake --build build -j24 --target install
You can visit GS Monitor for more details.
https://github.com/RongLiu-Leo/feature-3dgs/assets/102014841/7baf236f-29bc-4de1-9a99-97d528f6e63e
First, run the viewer:
./viewer_windows/bin/SIBR_remoteGaussian_app_rwdi # Windows
or
./<SIBR install dir>/bin/SIBR_remoteGaussian_app # Ubuntu 22.04
and then:
- If you want to monitor the training process, run train.py (see the Train section for more details).
- If you prefer faster training, run view.py once training is complete to interact with your trained model (see the View the Trained Model section for more details).
python train.py -s data/DATASET_NAME -m output/OUTPUT_NAME -f lseg --speedup --iterations 7000
You can customize the feature dimension to any number you want by setting:
- NUM_SEMANTIC_CHANNELS in submodules/diff-gaussian-rasterization-feature/cuda_rasterizer/config.h.
If you would like to use the optional CNN speed-up module, also change NUMBER accordingly:
- NUMBER in semantic_feature_size/NUMBER in scene/gaussian_model.py, line 142.
- NUMBER in feature_out_dim/NUMBER in train.py, line 51.
- NUMBER in feature_out_dim/NUMBER in render.py, lines 117 and 261.
Here feature_out_dim / NUMBER = NUM_SEMANTIC_CHANNELS. The feature_out_dim matches the ground-truth foundation model dimension: 512 for LSeg and 256 for SAM. The default is NUMBER = 4 (see the sketch below for how the dimensions relate).
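As a rough illustration of the dimension bookkeeping (not the repo's exact module), the speed-up path rasterizes a lower-dimensional feature per Gaussian and a small CNN decoder lifts it back to the foundation-model dimension, e.g. 512 / 4 = 128 for LSeg and 256 / 4 = 64 for SAM:
# Hedged sketch of the CNN speed-up idea: rasterize NUM_SEMANTIC_CHANNELS
# (= feature_out_dim / NUMBER) channels, then decode back to feature_out_dim.
# This illustrates the dimensions only, not the repo's exact layers.
import torch
import torch.nn as nn

feature_out_dim = 512   # LSeg ground-truth feature dimension (256 for SAM)
NUMBER = 4              # default speed-up factor
num_semantic_channels = feature_out_dim // NUMBER   # must match config.h

decoder = nn.Conv2d(num_semantic_channels, feature_out_dim, kernel_size=1)
rendered = torch.randn(1, num_semantic_channels, 480, 640)   # low-dim rendered feature map
decoded = decoder(rendered)                                   # compared against the 512-dim teacher map
print(decoded.shape)                                          # torch.Size([1, 512, 480, 640])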
For your reference, here are four configurations for running train.py:
For language-guided editing:
- -f lseg with NUM_SEMANTIC_CHANNELS 512* (no speed-up for this task).
For segmentation tasks:
- -f lseg --speedup with NUM_SEMANTIC_CHANNELS 128 and NUMBER = 4*.
- -f sam with NUM_SEMANTIC_CHANNELS 256.
- -f sam --speedup with NUM_SEMANTIC_CHANNELS 64 and NUMBER = 4*.
*: setup used in our experiments
Every time you modify any CUDA code, make sure to delete submodules/diff-gaussian-rasterization-feature/build and compile again:
pip install submodules/diff-gaussian-rasterization-feature
After training, you can view your trained model directly, while keeping the viewer running, with:
python view.py -s <path to COLMAP or NeRF Synthetic dataset> -m <path to trained model> -f lseg
Render from training and test views:
python render.py -s data/DATASET_NAME -m output/OUTPUT_NAME --iteration 3000
Render from novel views (add --novel_view):
python render.py -s data/DATASET_NAME -m output/OUTPUT_NAME -f lseg --iteration 3000 --novel_view
(Add a number after --num_views to change the number of views, e.g. --num_views 100; the default is 200.)
Render from novel views using multiple interpolations (add --novel_view and --multi_interpolate):
python render.py -s data/DATASET_NAME -m output/OUTPUT_NAME -f lseg --iteration 3000 --novel_view --multi_interpolate
For language-guided editing, render with an edit config (add --edit_config):
python render.py -s data/DATASET_NAME -m output/OUTPUT_NAME -f lseg --iteration 3000 --edit_config configs/XXX.yaml
Run the following to create videos (add --fps to change the FPS, e.g. --fps 20; the default is 10):
python videos.py --data output/OUTPUT_NAME --fps 10 -f lseg --iteration 10000
Run semantic segmentation:
python -u segmentation.py --data ../../output/DATASET_NAME/ --iteration 6000
To use your own label set, add --label_src (e.g. --label_src car,building,tree):
python -u segmentation.py --data ../../output/DATASET_NAME/ --iteration 6000 --label_src car,building,tree
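For context, the label set matters because open-vocabulary segmentation from distilled LSeg features works by matching each per-pixel feature against CLIP text embeddings of the labels. The sketch below shows that matching step conceptually, using OpenAI's clip package and random placeholder features; it is not the repo's segmentation.py, and the CLIP variant chosen here is only an illustrative assumption.
# Conceptual sketch: per-pixel features vs. CLIP text embeddings of the label set.
# Requires `pip install git+https://github.com/openai/CLIP`; features are placeholders.
import clip
import torch

labels = ["car", "building", "tree"]
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)             # 512-dim text embeddings

with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize(labels).to(device)).float()
text_emb = torch.nn.functional.normalize(text_emb, dim=-1)  # [num_labels, 512]

feat = torch.nn.functional.normalize(
    torch.randn(512, 480, 640, device=device), dim=0)       # [C, H, W] placeholder feature map
logits = torch.einsum("lc,chw->lhw", text_emb, feat)        # cosine similarity per label
label_map = logits.argmax(dim=0)                            # [H, W] predicted label indices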
Calculate the segmentation metric (for the Replica dataset experiment; our preprocessed data can be downloaded here):
cd encoders/lseg_encoder
python -u segmentation_metric.py --backbone clip_vitl16_384 --weights demo_e200.ckpt --widehead --no-scaleinv --student-feature-dir ../../output/OUTPUT_NAME/test/ours_30000/saved_feature/ --teacher-feature-dir ../../data/DATASET_NAME/rgb_feature_langseg/ --test-rgb-dir ../../output/OUTPUT_NAME/test/ours_30000/renders/ --workers 0 --eval-mode test
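If you want to sanity-check the reported numbers, mean IoU over a pair of predicted and ground-truth label maps can be computed in a few lines. This is a generic sketch of the metric with random placeholder maps, not a drop-in replacement for segmentation_metric.py.
# Generic mean-IoU over integer label maps (illustrative only).
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Example with random maps; replace with your rendered / ground-truth labels.
pred = np.random.randint(0, 5, (480, 640))
gt = np.random.randint(0, 5, (480, 640))
print(mean_iou(pred, gt, num_classes=5))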
Run the following (add --image to encode features directly from images).
With a point prompt (e.g. --point 500 800):
python segment_prompt.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --data ../../output/OUTPUT_NAME --iteration 7000 --point 500 800
With a box prompt (e.g. --box 100 100 1500 1200):
python segment_prompt.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --data ../../output/OUTPUT_NAME --iteration 7000 --box 100 100 1500 1200
With both point and box prompts (e.g. --point 500 800 and --box 100 100 1500 1200):
python segment_prompt.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --data ../../output/OUTPUT_NAME --iteration 7000 --box 100 100 1500 1200 --point 500 800
(Add --onnx_path to change the ONNX path.)
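For context, the --point and --box arguments correspond to SAM's standard prompt interface. The sketch below shows how the official SamPredictor consumes the same prompts on a rendered RGB image; segment_prompt.py instead works from the rendered feature maps, and the render path and filename here are only illustrative.
# Illustration of SAM's prompt interface using the official segment_anything API;
# this is not segment_prompt.py, which decodes from rendered feature maps.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_h"](checkpoint="checkpoints/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam.to(device))

# Illustrative path to one rendered novel view (adjust to your output).
image = np.array(Image.open("../../output/OUTPUT_NAME/novel_views/ours_7000/renders/00000.png").convert("RGB"))
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 800]]),    # same values as --point 500 800
    point_labels=np.array([1]),             # 1 = foreground click
    box=np.array([100, 100, 1500, 1200]),   # same values as --box 100 100 1500 1200
)
print(masks.shape, scores)                  # 3 candidate masks by default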
Run the following (add --image to encode features directly from images):
python segment.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --data ../../output/OUTPUT_NAME --iteration 7000
Run the following (remove --feature_path to encode features directly from images):
python segment_time.py --checkpoint checkpoints/sam_vit_h_4b8939.pth --model-type vit_h --image_path ../../output/OUTPUT_NAME/novel_views/ours_7000/renders/ --feature_path ../../output/OUTPUT_NAME/novel_views/ours_7000/saved_feature --output ../../output/OUTPUT_NAME
Our repo is developed based on 3D Gaussian Splatting, DFFs, and Segment Anything. Many thanks to the authors for open-sourcing their codebases.