
IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing

Paper | Project Page (https://neuralbodies.github.io/IntrinsicAvatar/)

This repository contains the implementation of our paper IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing.

You can find detailed usage instructions for installation, dataset preparation, training and testing below.

If you find our code useful, please cite:

@inproceedings{WangCVPR2024,
  title   = {IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing},
  author  = {Shaofei Wang and Bo\v{z}idar Anti\'{c} and Andreas Geiger and Siyu Tang},
  booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2024}
}

Requirements

Install

Code and SMPL Setup

Environment Setup

Dataset Preparation

Please follow the steps in DATASET.md.

Training

Training and validation use wandb for logging, which is free to use but requires registering an account online. If you don't want to use it, append logger.offline=true to your command (see the example after the training command below).

To train on the male-3-casual sequence of PeopleSnapshot, use the following command:

python launch.py dataset=peoplesnapshot/male-3-casual tag=IA-male-3-casual
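As noted above, online wandb logging can be disabled by appending logger.offline=true; for example:

# same training command as above, with wandb logging forced into offline mode
python launch.py dataset=peoplesnapshot/male-3-casual tag=IA-male-3-casual logger.offline=true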

Checkpoints, a code snapshot, and visualizations will be saved under the directory exp/intrinsic-avatar-male-3-casual/male-3-casual@YYYYMMDD-HHMMSS.

Testing

To test on the male-3-casual sequence for relighting on within-distribution poses, use the following command (model.render_mode=light enables light importance sampling; for quantitative evaluation, set model.resample_light=true and model.add_emitter=false instead, as in the variant shown after this command):

python launch.py mode=test \
    resume=${PATH_TO_CKPT} \
    dataset=peoplesnapshot/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=false \
    tag=IA-male-3-casual \
    model.add_emitter=true
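For quantitative relighting evaluation, a variant of the same command with the two evaluation flags flipped (all other settings unchanged) would look like this:

# quantitative-evaluation variant: resample the light and disable the emitter
python launch.py mode=test \
    resume=${PATH_TO_CKPT} \
    dataset=peoplesnapshot/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=true \
    tag=IA-male-3-casual \
    model.add_emitter=false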

To test on the male-3-casual sequence for relighting on out-of-distribution poses, use the following command:

python launch.py mode=test \
    resume=${PATH_TO_CKPT} \
    dataset=animation/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=false \
    tag=IA-male-3-casual \
    model.add_emitter=true

NOTE: if you encounter the error mismatched input '=' expecting <EOF>, it is most likely because your checkpoint path contains = (which is the default checkpoint filename format of this repo). In that case, quote the value twice, e.g. use 'resume="${PATH_TO_CKPT}"'; a concrete sketch is shown below. For details, please check this Hydra issue.
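For instance, with a hypothetical checkpoint path (the timestamp and the epoch=...-step=....ckpt filename below are placeholders, not actual files shipped with this repo), the outer single quotes keep the shell from interpreting the inner double quotes, and the inner double quotes tell Hydra to treat the whole path, including its = characters, as a single string value:

# NOTE: hypothetical checkpoint path for illustration only; substitute your actual path
python launch.py mode=test \
    'resume="exp/intrinsic-avatar-male-3-casual/male-3-casual@YYYYMMDD-HHMMSS/ckpt/epoch=99-step=20000.ckpt"' \
    dataset=peoplesnapshot/male-3-casual \
    dataset.hdri_filepath=hdri_images/city.hdr \
    light=envlight_tensor \
    model.render_mode=light \
    model.global_illumination=false \
    model.samples_per_pixel=1024 \
    model.resample_light=false \
    tag=IA-male-3-casual \
    model.add_emitter=true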

TODO

Acknowledgement

Our code structure is based on instant-nsr-pl. The importance sampling code (lib/nerfacc) follows the structure of NeRFAcc. The SMPL mesh visualization code (utils/smpl_renderer.py) is borrowed from NeuralBody. The LBS-based deformer code (models/deformers/fast-snarf) is borrowed from Fast-SNARF and InstantAvatar. We thank the authors of these papers for their wonderful work, which greatly facilitated the development of our project.