Vincent Sitzmann*,
Julien N. P. Martel*,
Alexander W. Bergman,
David B. Lindell,
Gordon Wetzstein
Stanford University, *denotes equal contribution
This is the official implementation of the paper "Implicit Neural Representations with Periodic Activation Functions".
If you want to experiment with SIREN, we have written a Colab. It is quite comprehensive and comes with a no-frills, drop-in implementation of SIREN. It doesn't require installing anything and walks through several experiments and properties of SIREN.
You can also play around with a tiny SIREN interactively, directly in the browser, via the Tensorflow Playground here. Thanks to David Cato for implementing this!
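For reference, here is a minimal PyTorch sketch of the core idea: a fully connected network with sine activations and the frequency-dependent initialization proposed in the paper. Layer sizes, omega_0 = 30, and the variable names are only illustrative; the Colab contains the actual drop-in implementation.

```python
import numpy as np
import torch
from torch import nn

class SineLayer(nn.Module):
    # A linear layer followed by a sine activation. omega_0 scales the input
    # frequencies; the weights are initialized so that activations stay well
    # distributed through depth (the paper's initialization scheme).
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    # A stack of sine layers followed by a plain linear output layer.
    def __init__(self, in_features=2, hidden_features=256, hidden_layers=3, out_features=1):
        super().__init__()
        layers = [SineLayer(in_features, hidden_features, is_first=True)]
        for _ in range(hidden_layers - 1):
            layers.append(SineLayer(hidden_features, hidden_features))
        layers.append(nn.Linear(hidden_features, out_features))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)
```

The later sketches in this README reuse this Siren class.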
If you want to reproduce all the results (including the baselines) shown in the paper, the videos, point clouds, and audio files can be found here.
You can then set up a conda environment with all dependencies like so:
conda env create -f environment.yml
conda activate siren
The code is organized as follows:
The directory experiment_scripts contains one script per experiment in the paper.
To monitor progress, the training code writes tensorboard summaries into a "summaries" subdirectory in the logging_root.
The image experiment can be reproduced with
python experiment_scripts/train_img.py --model_type=sine
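Under the hood, this amounts to regressing pixel intensities from pixel coordinates with a mean-squared error loss. A rough sketch of such a fitting loop, using the Siren class from the sketch above (the test image, learning rate, and number of steps are arbitrary choices, not the script's actual settings):

```python
import torch
from skimage import data

# Target image and a coordinate grid in [-1, 1]^2.
img = torch.from_numpy(data.camera() / 255.0).float()          # (H, W), grayscale
H, W = img.shape
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
pixels = img.reshape(-1, 1)

model = Siren(in_features=2, out_features=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(2000):
    loss = ((model(coords) - pixels) ** 2).mean()   # plain MSE on pixel values
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```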
The figures in the paper were made by extracting images from the tensorboard summaries. Example code showing how to do this can be found in the make_figures.py script.
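If you want to pull images out of the event files yourself, a sketch along these lines should work (the log directory and tag handling are placeholders; make_figures.py is the reference):

```python
import io
from PIL import Image
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Load all image summaries from a run directory (path is a placeholder).
acc = EventAccumulator('logs/experiment_1/summaries', size_guidance={'images': 0})
acc.Reload()

for tag in acc.Tags()['images']:
    for i, event in enumerate(acc.Images(tag)):
        # Each event carries a PNG-encoded image string.
        Image.open(io.BytesIO(event.encoded_image_string)).save(
            f'{tag.replace("/", "_")}_{i:04d}.png')
```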
This GitHub repository comes with both the "counting" and "bach" audio clips under ./data.
A SIREN can be fit to them with
python experiment_scripts/train_audio.py --model_type=sine --wav_path=<path_to_audio_file>
The "bikes" video sequence comes with scikit-video and need not be downloaded. The cat video can be downloaded with the link above.
To fit a model to a video, run
python experiment_scripts/train_video.py --model_type=sine --experiment_name bikes_video
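Conceptually this is the same as the image experiment, except the network takes space-time coordinates (t, y, x) and outputs an RGB color per point. A rough sketch of the coordinate and target layout, again using the Siren class from above (shapes and the random stand-in video are placeholders):

```python
import torch

T, H, W = 100, 128, 128
video = torch.rand(T, H, W, 3)   # stand-in for a loaded video, values in [0, 1]

# Space-time coordinates in [-1, 1]^3, one RGB target per coordinate.
axes = [torch.linspace(-1, 1, n) for n in (T, H, W)]
coords = torch.stack(torch.meshgrid(*axes, indexing='ij'), dim=-1).reshape(-1, 3)
targets = video.reshape(-1, 3)

model = Siren(in_features=3, out_features=3)
# Train with an MSE loss on random subsets of these coordinates, since all
# T*H*W pixels rarely fit into a single batch.
```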
For the Poisson experiments, there are three separate scripts: one for reconstructing an image from its gradients (train_poisson_grad_img.py), one for reconstructing it from its Laplacian (train_poisson_lapl_image.py), and one for combining two images (train_poisson_gradcomp_img.py); the sketch below illustrates the gradient-supervision idea.
Some of the experiments were run using the BSD500 dataset, which you can download here.
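The gradient-based script supervises the derivatives of the SIREN rather than its values: the spatial gradient of the network output (obtained with autograd) is matched against the target image gradients. A minimal sketch of that loss, with the Siren class from above and random placeholder targets:

```python
import torch

def gradient(y, x):
    # dy/dx for a batch of scalar outputs; create_graph=True keeps the graph
    # so a loss defined on the gradient can itself be backpropagated.
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

model = Siren(in_features=2, out_features=1)
coords = (torch.rand(1024, 2) * 2 - 1).requires_grad_(True)  # pixel coordinates
target_grads = torch.rand(1024, 2)                           # stand-in for image gradients (e.g. Sobel)

pred = model(coords)                  # predicted intensities, (N, 1)
pred_grads = gradient(pred, coords)   # spatial gradients of the fit, (N, 2)
loss = ((pred_grads - target_grads) ** 2).mean()
loss.backward()
```

The Laplacian and composition variants follow the same pattern, only with second derivatives or with gradients blended from two images as the supervision signal.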
To fit a Signed Distance Function (SDF) with SIREN, you first need a pointcloud in .xyz format that includes surface normals. If you only have a mesh / ply file, this can be accomplished with the open-source tool Meshlab.
To reproduce our results, we provide both models, the Thai Statue from the 3D Stanford model repository and the living room scene used in our paper, for download here.
To start training a SIREN, run:
python experiment_scripts/train_single_sdf.py --model_type=sine --point_cloud_path=<path_to_the_model_in_xyz_format> --batch_size=250000 --experiment_name=experiment_1
This will regularly save checkpoints in a subdirectory "experiment_1" of the directory specified by the root path in the script. The batch_size is typically adjusted to fill the entire memory of your GPU. In our experiments, a SIREN with 3 hidden layers of 256 units each allows a batch size of 230,000 to 250,000 on an NVIDIA GPU with 12 GB of memory.
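For intuition, the SDF fitting objective combines several terms: the SDF should vanish on surface points, its gradient should align with the surface normals, its gradient norm should be 1 everywhere (the eikonal constraint), and off-surface points should be pushed away from the zero level set. A hedged sketch of these terms (the weights and the exact formulation in the script may differ):

```python
import torch

def sdf_losses(model, on_surface_coords, normals, off_surface_coords, alpha=100.0):
    # Evaluate the SDF and its spatial gradient on all sample points.
    coords = torch.cat([on_surface_coords, off_surface_coords], dim=0).detach().requires_grad_(True)
    sdf = model(coords)
    grad = torch.autograd.grad(sdf, coords, torch.ones_like(sdf), create_graph=True)[0]

    n_on = on_surface_coords.shape[0]
    sdf_on, grad_on, sdf_off = sdf[:n_on], grad[:n_on], sdf[n_on:]

    surface_loss = sdf_on.abs().mean()                                        # f(x) = 0 on the surface
    normal_loss = (1 - torch.cosine_similarity(grad_on, normals, dim=-1)).mean()
    eikonal_loss = ((grad.norm(dim=-1) - 1) ** 2).mean()                      # |grad f| = 1 everywhere
    off_surface_loss = torch.exp(-alpha * sdf_off.abs()).mean()               # keep off-surface sdf away from 0
    return surface_loss, normal_loss, eikonal_loss, off_surface_loss
```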
To inspect an SDF fitted to a 3D point cloud, we need to create a mesh from the zero-level set of the SDF. This is done with another script that runs a marching cubes algorithm (adapted from the DeepSDF github repo) and saves the mesh in .ply file format. It can be called with:
python experiment_scripts/test_single_sdf.py --checkpoint_path=<path_to_the_checkpoint_of_the_trained_model> --experiment_name=experiment_1_rec
This will save the .ply file as "reconstruction.ply" in "experiment_1_rec" (be patient, the marching cubes meshing step takes some time ;) ). If the machine you use for the reconstruction does not have enough RAM, running the test_single_sdf.py script will likely freeze. In that case, add the option --resolution=512 to the command line above (the default is 1600), which reconstructs the mesh at a lower spatial resolution.
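The reconstruction step boils down to evaluating the trained SDF on a dense 3D grid and running marching cubes on the zero level set. A rough sketch using scikit-image and trimesh (the actual script uses the DeepSDF-adapted code; a plain placeholder model stands in for a trained checkpoint here):

```python
import torch
import skimage.measure
import trimesh

model = Siren(in_features=3, out_features=1)   # placeholder; load the trained checkpoint in practice
N = 256                                        # grid resolution; memory grows as N^3

lin = torch.linspace(-1, 1, N)
grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing='ij'), dim=-1).reshape(-1, 3)

# Evaluate the SDF in chunks to keep memory bounded, then reshape to a volume.
with torch.no_grad():
    sdf = torch.cat([model(chunk) for chunk in grid.split(64 ** 3)], dim=0)
sdf = sdf.reshape(N, N, N).numpy()

# Extract the zero level set and write it out as a mesh.
verts, faces, normals, _ = skimage.measure.marching_cubes(sdf, level=0.0)
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export('reconstruction.ply')
```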
The .ply file can be visualized using software such as Meshlab (a cross-platform visualizer and editor for 3D models).
The Helmholtz and wave equation experiments can be reproduced with the train_wave_equation.py and train_helmholtz.py scripts.
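Both scripts train the SIREN by penalizing the residual of the PDE at sampled collocation points, with the required derivatives computed via autograd. A simplified, real-valued sketch of a Helmholtz-style residual (the paper's setup uses complex-valued fields and additional source terms; k, the source, and the sample points below are placeholders):

```python
import torch

def laplacian(u, coords):
    # Sum of second derivatives of a scalar field u w.r.t. its input coordinates,
    # built from two nested autograd calls (create_graph keeps it differentiable).
    grad = torch.autograd.grad(u, coords, torch.ones_like(u), create_graph=True)[0]
    lap = torch.zeros_like(u)
    for i in range(coords.shape[-1]):
        d2 = torch.autograd.grad(grad[..., i:i + 1], coords,
                                 torch.ones_like(grad[..., i:i + 1]), create_graph=True)[0]
        lap = lap + d2[..., i:i + 1]
    return lap

model = Siren(in_features=2, out_features=1)
coords = (torch.rand(1024, 2) * 2 - 1).requires_grad_(True)   # collocation points
k, source = 20.0, torch.zeros(1024, 1)                        # wavenumber and source term

u = model(coords)
residual = laplacian(u, coords) + (k ** 2) * u - source       # Helmholtz: laplacian(u) + k^2 u = f
pde_loss = (residual ** 2).mean()
```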
We're using the excellent torchmeta to implement hypernetworks. We realized that there is a technical report for torchmeta which we forgot to cite; it'll make it into the camera-ready version!
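The repository's hypernetworks are built with torchmeta's meta-modules; the plain-PyTorch sketch below only illustrates the underlying idea of one network emitting the parameters of another (the class name, sizes, and latent code are made up for the example):

```python
import torch
from torch import nn
import torch.nn.functional as F

class LayerHypernet(nn.Module):
    # A small MLP maps a latent code z to the weight and bias of one target
    # linear layer, which is then applied to the input x functionally.
    def __init__(self, latent_dim, in_features, out_features, hidden=128):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_features * out_features + out_features))

    def forward(self, z, x):
        params = self.net(z)
        split = self.in_features * self.out_features
        w = params[:split].view(self.out_features, self.in_features)
        b = params[split:]
        return F.linear(x, w, b)

hyper = LayerHypernet(latent_dim=64, in_features=2, out_features=256)
features = hyper(torch.randn(64), torch.rand(1024, 2))   # (1024, 256)
```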
If you find our work useful in your research, please cite:
@inproceedings{sitzmann2019siren,
    author = {Sitzmann, Vincent and Martel, Julien N.P. and Bergman, Alexander W.
              and Lindell, David B. and Wetzstein, Gordon},
    title = {Implicit Neural Representations with Periodic Activation Functions},
    booktitle = {arXiv},
    year = {2020}
}
If you have any questions, please feel free to email the authors.