
Official code for NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild
https://rishitdagli.com/nerf-us
Apache License 2.0

NeRF-US👥: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild

Rishit Dagli1,2, Atsuhiro Hibi2,3,4, Rahul G. Krishnan1,5, Pascal Tyrrell2,4,6

Departments of 1Computer Science and 2Medical Imaging, University of Toronto, Canada
3Division of Neurosurgery, St Michael's Hospital, Unity Health Toronto, Canada
4Institute of Medical Science, 5Department of Laboratory Medicine and Pathobiology, and 6Department of Statistical Sciences, University of Toronto, Canada

This work presents NeRF-US, a method to train NeRFs in-the-wild for sound fields like ultrasound imaging data. Check out our website to view some results of this work.

This codebase is forked from the awesome Ultra-NeRF and Nerfbusters repositories.

Installation

  1. First, install the pip package by running:
pip install nerfus

or you could also install the package from source:

git clone https://github.com/Rishit-Dagli/nerf-us
cd nerf-us
pip install -e .
  2. Next, install the dependencies. If you use virtualenv, you can run:
pip install -r requirements.txt

If you use conda you could run:

conda env create -f environment.yml
conda activate nerfus
  3. Install Nerfstudio and its dependencies. The installation guide can be found in the Nerfstudio documentation.

We also use the branch nerfbusters-changes. You may have to run the viewer locally if you want full functionality.

cd path/to/nerfstudio
pip install -e .
pip install torch==1.13.1 torchvision functorch --extra-index-url https://download.pytorch.org/whl/cu117
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  4. Install binvox to voxelize cubes:
mkdir bins
cd bins
wget -O binvox https://www.patrickmin.com/binvox/linux64/binvox?rnd=16811490753710
cd ../
chmod +x bins/binvox

Check out the Tips section for tips on installing the requirements.

Overview of Codebase

For data preparation, the cubes directory contains modules for processing 3D data, including dataset handling (datasets3D.py), rendering (render.py), and visualization (visualize3D.py). The data_modules directory further supports data management with modules for 3D cubes and a general datamodule for the diffusion model.

The diffusion model is primarily implemented in the models directory, which includes the core model definition (model.py), U-Net architecture (unet.py), and related utilities. The lightning directory contains the training logic for the diffusion model, including loss functions (dsds_loss.py) and the trainer module (nerfus_trainer.py). The NeRF component is housed in the nerf directory, which includes experiment configurations, utility functions, and the main pipeline for NeRF-US (nerfus_pipeline.py).
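The cube-extraction step performed by the cubes directory can be sketched roughly as follows. This is an illustrative sketch only: the function name, parameters, and defaults below are assumptions, not the repo's actual API in datasets3D.py.

```python
import numpy as np

def extract_cubes(volume, cube_size=32, stride=32):
    """Slide a window over a 3D volume and collect fixed-size cubes.

    Illustrative sketch of the kind of preprocessing the cubes
    modules perform; names and defaults are assumptions.
    """
    cubes = []
    d, h, w = volume.shape
    for z in range(0, d - cube_size + 1, stride):
        for y in range(0, h - cube_size + 1, stride):
            for x in range(0, w - cube_size + 1, stride):
                cubes.append(volume[z:z + cube_size,
                                    y:y + cube_size,
                                    x:x + cube_size])
    # Stack into a single (num_cubes, D, H, W) batch for the diffusion model.
    return np.stack(cubes)

# A 64^3 volume yields 2 * 2 * 2 = 8 non-overlapping 32^3 cubes.
volume = np.random.rand(64, 64, 64)
cubes = extract_cubes(volume)
print(cubes.shape)  # (8, 32, 32, 32)
```

Batches of such cubes are what the data modules feed to the 3D diffusion model during training.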

.
├── config (configuration files for the datasets and models)
│   ├── shapenet.yaml (configuration file for the shapenet dataset)
│   └── synthetic-knee.yaml (configuration file for the diffusion model)
├── environment.yml (conda environment file)
├── nerfus (main codebase)
│   ├── bins
│   │   └── binvox (binvox executable)
│   ├── cubes (making cubes from 3D data)
│   │   ├── __init__.py
│   │   ├── binvox_rw.py
│   │   ├── datasets3D.py
│   │   ├── render.py
│   │   ├── utils.py
│   │   └── visualize3D.py
│   ├── data
│   ├── data_modules (data modules for cubes)
│   │   ├── __init__.py
│   │   ├── cubes3d.py
│   │   └── datamodule.py (data module for diffusion model)
│   ├── download_nerfus_dataset.py (script to download the diffusion model dataset)
│   ├── lightning (training lightning modules for diffusion model)
│   │   ├── __init__.py
│   │   ├── dsds_loss.py (loss for diffusion model)
│   │   └── nerfus_trainer.py (training code for diffusion model)
│   ├── models (model definition for diffusion model)
│   │   ├── __init__.py
│   │   ├── fp16_util.py
│   │   ├── model.py
│   │   ├── nn.py
│   │   └── unet.py
│   ├── nerf (main codebase for the NeRF)
│   │   ├── experiment_configs (configurations for the Nerfacto experiments)
│   │   │   ├── __init__.py
│   │   │   ├── nerfacto_experiments.py
│   │   │   └── utils.py
│   │   ├── nerfbusters_utils.py (utils for nerfbusters)
│   │   ├── nerfus_config.py (nerfstudio method configurations for the NeRF-US)
│   │   └── nerfus_pipeline.py (pipeline for NeRF-US)
│   ├── run.py (training script for diffusion model)
│   └── utils (utility functions for the NeRF training)
│       ├── __init__.py
│       ├── metrics.py
│       ├── utils.py
│       └── visualizations.py
└── requirements.txt (requirements file we use)

Usage

Training the Diffusion Model

First, download either the synthetic knee cubes or the synthetic phantom cubes dataset and place it under nerfus/data so the layout looks like this:

.
├── config
│   ├── shapenet.yaml
│   └── synthetic-knee.yaml
├── nerfus
│   ├── bins
│   │   └── binvox
│   ├── data
│   │   ├── syn-knee
│   │   └── syn-spi

We can now train the 3D diffusion model using the following command:

python nerfus/run.py --config config/synthetic-knee.yaml --name synthetic-knee-experiment --pt

This also automatically downloads the Nerfbusters checkpoint, on which we run adaptation.

Training the NeRF

Contrary to many other NeRF + diffusion approaches, we do not first train a NeRF and then continue training with the diffusion model as a regularizer. Instead, we train the NeRF together with the diffusion model from scratch.

We run training with our method using Nerfstudio commands:

ns-train nerfus --data path/to/data nerfstudio-data --eval-mode train-split-fraction

For our baselines and experiments, we directly use Nerfstudio commands to train on the 10 individual datasets. For our ablation study, we perform 3 ablations:

  1. for training without the border probability, we set the corresponding lambda to 0 (this could easily be made faster)
  2. for training without the scattering density, we set the corresponding lambda to 0 (this could easily be made faster)
  3. for training without ultrasound rendering, we use the standard Nerfstudio commands
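The first two ablations amount to zeroing one weight in the combined training objective. The sketch below shows the idea; the term and parameter names are assumptions for illustration, not the actual symbols used in dsds_loss.py or the paper.

```python
def nerfus_total_loss(rgb_loss, diffusion_loss, border_prob_loss,
                      scattering_density_loss,
                      lambda_border=1.0, lambda_scatter=1.0):
    """Weighted sum of per-batch loss terms.

    Illustrative only: names and weights are assumptions. Setting a
    lambda to 0 reproduces the corresponding ablation, since that
    term then contributes nothing to the gradient.
    """
    return (rgb_loss
            + diffusion_loss
            + lambda_border * border_prob_loss
            + lambda_scatter * scattering_density_loss)

# Full objective vs. the "no border probability" ablation.
full = nerfus_total_loss(0.5, 0.25, 0.125, 0.375)
no_border = nerfus_total_loss(0.5, 0.25, 0.125, 0.375, lambda_border=0.0)
print(full, no_border)  # 1.25 1.125
```

Zeroing a lambda still computes the unused term, which is why the ablation "could easily be made faster" by skipping that computation entirely.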

We can use any other Nerfstudio commands as well. For instance, rendering across a path:

ns-render --load-config path/to/config.yml --traj filename --camera-path-filename path/to/camera-path.json --output-path renders/my-render.mp4

or computing metrics:

ns-eval --load-config path/to/config.yml --output-path path/to/output.json

Tips

We share some tips on running the code and reproducing our results.

- on installing required packages
- on compute

Credits

This codebase is built on top of the Ultra-NeRF and Nerfbusters repositories; thanks to their maintainers.

Citation

If you find NeRF-US helpful, please consider citing:

@misc{dagli2024nerfusremovingultrasoundimaging,
      title={NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild}, 
      author={Rishit Dagli and Atsuhiro Hibi and Rahul G. Krishnan and Pascal N. Tyrrell},
      year={2024},
      eprint={2408.10258},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.10258}, 
}