This work presents NeRF-US, a method to train NeRFs in the wild for sound fields like ultrasound imaging data. Check out our website to view some results of this work.
This codebase is forked from the awesome Ultra-NeRF and Nerfbusters repositories.
You can install the package from PyPI:

```bash
pip install nerfus
```

or you could also install the package from source:

```bash
git clone https://github.com/Rishit-Dagli/nerf-us
cd nerf-us
pip install -e .
```
If you use virtualenv, you could run:

```bash
pip install -r requirements.txt
```

If you use conda, you could run:

```bash
conda env create -f environment.yml
conda activate nerfus
```
We also use Nerfstudio from the `nerfbusters-changes` branch. You may have to run the viewer locally if you want full functionality:

```bash
cd path/to/nerfstudio
pip install -e .
pip install torch==1.13.1 torchvision functorch --extra-index-url https://download.pytorch.org/whl/cu117
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```
We use `binvox` to voxelize cubes:

```bash
mkdir bins
cd bins
wget -O binvox https://www.patrickmin.com/binvox/linux64/binvox?rnd=16811490753710
cd ../
chmod +x bins/binvox
```
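Once the executable is in place, you can load the resulting `.binvox` grids in Python. A minimal sketch, assuming `nerfus/cubes/binvox_rw.py` exposes the standard `binvox-rw-py` interface and that the example file path exists:

```python
import numpy as np

# Assumes the vendored module follows the standard binvox-rw-py API:
# read_as_3d_array returns a Voxels object with a boolean `.data` grid.
from nerfus.cubes import binvox_rw

# Path is illustrative; point it at any grid produced by `bins/binvox`.
with open("data/example.binvox", "rb") as f:
    voxels = binvox_rw.read_as_3d_array(f)

occupancy = voxels.data.astype(np.float32)  # (D, H, W) bool grid -> float cube
print(occupancy.shape, occupancy.mean())    # grid dimensions and fill fraction
```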
Check out the Tips section for advice on installing the requirements.
For data preparation, the `cubes` directory contains modules for processing 3D data, including dataset handling (`datasets3D.py`), rendering (`render.py`), and visualization (`visualize3D.py`). The `data_modules` directory further supports data management with modules for 3D cubes and a general datamodule for the diffusion model.

The diffusion model is primarily implemented in the `models` directory, which includes the core model definition (`model.py`), the U-Net architecture (`unet.py`), and related utilities. The `lightning` directory contains the training logic for the diffusion model, including the loss function (`dsds_loss.py`) and the trainer module (`nerfus_trainer.py`). The NeRF component is housed in the `nerf` directory, which includes experiment configurations, utility functions, and the main NeRF-US pipeline (`nerfus_pipeline.py`).
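To make the role of the diffusion prior concrete, here is a conceptual sketch of a score-distillation-style loss on a density cube. The function name, the simplified noise schedule, and the `diffusion_model(noisy, t)` epsilon-prediction interface are illustrative assumptions, not the exact implementation in `dsds_loss.py`:

```python
import torch


def sds_cube_loss(density_cube, diffusion_model, t_range=(0.02, 0.98)):
    """Score-distillation-style loss on a (D, H, W) density cube.

    `diffusion_model(noisy, t)` is assumed to predict the added noise
    (epsilon parameterization). The schedule below is deliberately
    simplified for illustration.
    """
    x = density_cube.unsqueeze(0).unsqueeze(0)  # (1, 1, D, H, W)
    t = torch.empty(1, device=x.device).uniform_(*t_range)
    noise = torch.randn_like(x)
    noisy = torch.sqrt(1 - t) * x + torch.sqrt(t) * noise  # simplified schedule
    with torch.no_grad():
        eps_pred = diffusion_model(noisy, t)
    # The SDS gradient w.r.t. x is (eps_pred - noise); detaching it and
    # multiplying by x routes that gradient into the NeRF densities only.
    return ((eps_pred - noise).detach() * x).mean()
```

The full layout of the repository is shown below.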
```
.
├── config (configuration files for the datasets and models)
│   ├── shapenet.yaml (configuration file for the shapenet dataset)
│   └── synthetic-knee.yaml (configuration file for the diffusion model)
├── environment.yml (conda environment file)
├── nerfus (main codebase)
│   ├── bins
│   │   └── binvox (binvox executable)
│   ├── cubes (making cubes from 3D data)
│   │   ├── __init__.py
│   │   ├── binvox_rw.py
│   │   ├── datasets3D.py
│   │   ├── render.py
│   │   ├── utils.py
│   │   └── visualize3D.py
│   ├── data
│   ├── data_modules (data modules for cubes)
│   │   ├── __init__.py
│   │   ├── cubes3d.py
│   │   └── datamodule.py (data module for diffusion model)
│   ├── download_nerfus_dataset.py (script to download the diffusion model dataset)
│   ├── lightning (training lightning modules for diffusion model)
│   │   ├── __init__.py
│   │   ├── dsds_loss.py (loss for diffusion model)
│   │   └── nerfus_trainer.py (training code for diffusion model)
│   ├── models (model definition for diffusion model)
│   │   ├── __init__.py
│   │   ├── fp16_util.py
│   │   ├── model.py
│   │   ├── nn.py
│   │   └── unet.py
│   ├── nerf (main codebase for the NeRF)
│   │   ├── experiment_configs (configurations for the Nerfacto experiments)
│   │   │   ├── __init__.py
│   │   │   ├── nerfacto_experiments.py
│   │   │   └── utils.py
│   │   ├── nerfbusters_utils.py (utils for nerfbusters)
│   │   ├── nerfus_config.py (nerfstudio method configurations for NeRF-US)
│   │   └── nerfus_pipeline.py (pipeline for NeRF-US)
│   ├── run.py (training script for diffusion model)
│   └── utils (utility functions for the NeRF training)
│       ├── __init__.py
│       ├── metrics.py
│       ├── utils.py
│       └── visualizations.py
└── requirements.txt (requirements file we use)
```
First, download either the synthetic knee cubes or the synthetic phantom cubes dataset and place it under `nerfus/data`, so your directory structure looks like:
```
.
├── config
│   ├── shapenet.yaml
│   └── synthetic-knee.yaml
├── nerfus
│   ├── bins
│   │   └── binvox
│   ├── data
│   │   ├── syn-knee
│   │   └── syn-spi
```
We can now train the 3D diffusion model using the following command:
```bash
python nerfus/run.py --config config/synthetic-knee.yaml --name synthetic-knee-experiment --pt
```
This also automatically downloads the Nerfbusters checkpoint, on which we run adaptation.
Contrary to many other NeRF + diffusion methods, we do not first train a NeRF and then continue training it with the diffusion model as a regularizer. Instead, we train the NeRF together with the diffusion model from scratch.
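Schematically, each training step combines the usual NeRF photometric loss with the diffusion guidance from the very first iteration. A minimal runnable sketch with hypothetical stand-ins (`render_rays`, `sample_density_cube`, the toy MLP, and the weight 0.1 are all assumptions; it reuses `sds_cube_loss` from the sketch above, while the real pipeline lives in `nerfus/nerf/nerfus_pipeline.py`):

```python
import torch

# Hypothetical stand-ins so the sketch runs end to end.
nerf = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4)
)

def diffusion_model(noisy, t):
    return torch.zeros_like(noisy)  # frozen 3D prior stub

def render_rays(model, rays):
    return torch.sigmoid(model(rays)[..., :3])  # toy "rendering" of RGB

def sample_density_cube(model, n=16):
    pts = torch.rand(n ** 3, 3)  # random crop of the volume
    return torch.relu(model(pts)[..., 3]).reshape(n, n, n)

optimizer = torch.optim.Adam(nerf.parameters(), lr=1e-3)
for step in range(100):
    rays, pixels = torch.rand(1024, 3), torch.rand(1024, 3)  # placeholder batch
    loss = torch.nn.functional.mse_loss(render_rays(nerf, rays), pixels)
    # Diffusion guidance is applied from step 0: joint training, not fine-tuning.
    cube = sample_density_cube(nerf)
    loss = loss + 0.1 * sds_cube_loss(cube, diffusion_model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```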
We run the training for our method using Nerfstudio commands:

```bash
ns-train nerfus --data path/to/data nerfstudio-data --eval-mode train-split-fraction
```
For our baselines and experiments, we directly use the Nerfstudio commands to train on the 10 individual datasets. For our ablation study, we do 3 ablations:

- setting the border-probability loss weight `lambda` to 0 (this could easily be made faster)
- setting the scattering-density loss weight `lambda` to 0 (this could easily be made faster)

We can use any other Nerfstudio commands as well. For instance, rendering across a path:
```bash
ns-render --load-config path/to/config.yml --traj filename --camera-path-filename path/to/camera-path.json --output-path renders/my-render.mp4
```
or computing metrics:

```bash
ns-eval --load-config path/to/config.yml --output-path path/to/output.json
```
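If you want to compute image metrics yourself from saved renders, here is a minimal sketch using `torchmetrics` (the random tensors standing in for loaded renders are illustrative; the repository's own metric code lives in `nerfus/utils/metrics.py`):

```python
import torch
from torchmetrics.functional import (
    peak_signal_noise_ratio,
    structural_similarity_index_measure,
)

# Illustrative tensors standing in for loaded renders and ground truth,
# both shaped (N, 3, H, W) with values in [0, 1].
preds = torch.rand(4, 3, 256, 256)
target = torch.rand(4, 3, 256, 256)

psnr = peak_signal_noise_ratio(preds, target, data_range=1.0)
ssim = structural_similarity_index_measure(preds, target, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```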
We share some tips on running the code and reproducing our results.

- We recommend installing `open3d` and `tiny-cuda-nn` before Nerfstudio, and installing Nerfstudio separately from source. We also recommend building these packages on the same GPU you plan to run them on.
- We recommend using the `requirements.txt` file to install the required packages. While we provide a conda `environment.yml` (especially due to some Nerfstudio problems people might face), we have not tested it but expect it to work.

This codebase is built on top of the Ultra-NeRF and Nerfbusters repositories, and we thank their maintainers.
If you find NeRF-US helpful, please consider citing:
```bibtex
@misc{dagli2024nerfusremovingultrasoundimaging,
  title={NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild},
  author={Rishit Dagli and Atsuhiro Hibi and Rahul G. Krishnan and Pascal N. Tyrrell},
  year={2024},
  eprint={2408.10258},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2408.10258},
}
```