3d-motion-magnification / 3d-motion-mag


3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields

This repo contains an implementation of the method described in the paper, with demonstrations on both synthetic multi-view data generated in Blender and handheld monocular video captured in the real world.

Project page

Preparing the environment

conda create -n magnerf python=3.11
conda activate magnerf
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

# lightweight COLMAP model parser used by the dataset loaders
cd magnerf/datasets
git clone https://github.com/trueprice/pycolmap.git
cd ../..

# plenoptic provides steerable pyramid tools used for phase-based processing
git clone https://github.com/LabForComputationalVision/plenoptic.git
cd plenoptic
pip install -e .
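
As a quick sanity check before training, the commands below should run without errors; this is a minimal sketch, where the torch.cuda check simply confirms the GPU build of PyTorch was installed and the import names are the packages' standard ones:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import plenoptic"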

Blender multi-view scenes

Link to data: Google Drive. Blender multi-view scenes are under data/synthetic.

See scripts/synthetic/run_blender.sh for commands to begin training. Set appropriate data_dirs and expname (scene folder name) in the config file under magnerf/configs.

See scripts/synthetic/mag_blender.sh for commands to generate magnified renderings after training. The output will be saved under logs/blender/$expname/output/render.
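
Putting the two steps together, a minimal end-to-end run might look like the sketch below. The data_dirs and expname field names and all paths come from above; the per-scene config filename is hypothetical, so adapt it to your scene:

# 1. In the config under magnerf/configs (a hypothetical per-scene file),
#    set data_dirs to ['data/synthetic/<scene>'] and expname to '<scene>'.
# 2. Train, then render the magnified output:
bash scripts/synthetic/run_blender.sh
bash scripts/synthetic/mag_blender.sh
# 3. Magnified frames are written to logs/blender/$expname/output/render.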

Handheld monocular video captures

Link to data: Google Drive. Handheld scenes are under data/handheld.

COLMAP is used to pre-process the data (in the images folder) and generate camera poses (in the sparse folder).
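
If you capture your own video, a standard COLMAP sparse-reconstruction pipeline produces the expected images and sparse layout. These are stock COLMAP commands rather than the authors' exact invocation, and <scene> is a placeholder:

# extract video frames into <scene>/images first, then:
colmap feature_extractor --database_path <scene>/database.db --image_path <scene>/images
colmap exhaustive_matcher --database_path <scene>/database.db
mkdir -p <scene>/sparse
colmap mapper --database_path <scene>/database.db --image_path <scene>/images --output_path <scene>/sparse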

See scripts/handheld/run_baby.sh for example commands to begin training. Set appropriate data_dirs and expname (scene folder name) in the config file under magnerf/configs.

See scripts/handheld/mag_baby.sh for example commands to generate magnified renderings after training. The output will be saved under logs/handheld/baby/output/render. A combined sketch follows below.
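
Assuming the scripts need no extra arguments, the baby scene runs end to end like this (script names and the output path are those given above):

bash scripts/handheld/run_baby.sh    # train the time-varying NeRF on the handheld capture
bash scripts/handheld/mag_baby.sh    # render the magnified result
ls logs/handheld/baby/output/render  # magnified frames land here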

Handheld + tripod video captures

We provide some captures where the handheld, freely moving part of the video is used to reconstruct the 3D NeRF, while the motion is magnified from the stabilized part of the video captured on a tripod. Link to data: Google Drive. Tripod scenes are under data/tripod, with example scripts under scripts/tripod.
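
By analogy with the handheld examples, usage likely follows the same train-then-magnify pattern; the script names below are assumptions mirroring the handheld naming, so check scripts/tripod for the actual filenames:

bash scripts/tripod/run_<scene>.sh   # reconstruct the NeRF from the handheld segment
bash scripts/tripod/mag_<scene>.sh   # magnify motion from the tripod segment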


Citation

@inproceedings{feng2023motionmag,
    author    = {Feng, Brandon Y. and AlZayer, Hadi and Rubinstein, Michael and Freeman, William T. and Huang, Jia-Bin},
    title     = {3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year      = {2023},
}

The codebase is adapted from K-Planes (Explicit Radiance Fields in Space, Time, and Appearance) and redistributed under the BSD 3-Clause license.