This is a very fast offline fitting framework that uses only landmarks. The commonly used 3DMM models BFM, FaceVerse, and FLAME are currently supported.
More results are in "gifs/".
conda env create -f environment.yaml
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1120/download.html
(This wheel targets Python 3.8, CUDA 11.3, and PyTorch 1.12; pick the wheel matching your environment.)
Make the dataset according to the following directory structure:
data_root
├── images
│   ├── {frame_0}
│   │   ├── image_{camera_id_0}.jpg
│   │   ├── image_{camera_id_1}.jpg
│   │   └── ...
│   ├── {frame_1}
│   │   ├── image_{camera_id_0}.jpg
│   │   ├── image_{camera_id_1}.jpg
│   │   └── ...
│   └── ...
└── cameras
    ├── {frame_0}
    │   ├── camera_{camera_id_0}.npz
    │   ├── camera_{camera_id_1}.npz
    │   └── ...
    ├── {frame_1}
    │   ├── camera_{camera_id_0}.npz
    │   ├── camera_{camera_id_1}.npz
    │   └── ...
    └── ...
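The layout above pairs each image_{camera_id}.jpg with the camera_{camera_id}.npz of the same frame. As a minimal sketch (not part of this repo), such a dataset can be traversed like this; list_frames is a hypothetical helper:

```python
from pathlib import Path

def list_frames(data_root):
    """Enumerate (frame, camera_id, image_path, camera_path) tuples
    from the data_root layout described above."""
    data_root = Path(data_root)
    samples = []
    for frame_dir in sorted((data_root / "images").iterdir()):
        for image_path in sorted(frame_dir.glob("image_*.jpg")):
            # camera_id is whatever follows "image_" in the file name
            camera_id = image_path.stem.split("_", 1)[1]
            camera_path = (data_root / "cameras" / frame_dir.name
                           / f"camera_{camera_id}.npz")
            samples.append((frame_dir.name, camera_id, image_path, camera_path))
    return samples
```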
I provide a script "preprocess/preprocess_nersemble.py" for preprocessing the NeRSemble dataset into such a structured dataset. Please apply for access to the NeRSemble dataset, download it, and put it into "path/to/raw_NeRSemble/".
cd preprocess
python preprocess_nersemble.py
I also provide a script "preprocess/preprocess_monocular_video.py" for converting a monocular video into this structure.
cd preprocess
python preprocess_monocular_video.py
First, edit the config file, for example "config/NeRSemble_031.yaml". Second, detect 2D landmarks for all the input images.
python detect_landmarks.py --config config/NeRSemble_031.yaml
Third, fit the 3DMM model.
python fitting.py --config config/NeRSemble_031.yaml
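To illustrate the core idea of landmark-based fitting (not the repo's actual solver), here is a toy NumPy sketch: a linear 3DMM shape is mean + basis @ c, a fixed orthographic camera drops the z axis, and the coefficients c are recovered from 2D landmarks by regularized least squares. All names (fit_coeffs, reg) are hypothetical:

```python
import numpy as np

def fit_coeffs(mean, basis, lmk_idx, lmk2d, reg=1e-3):
    """Toy landmark fitting: solve for linear 3DMM coefficients c
    minimizing ||P(mean + basis @ c)[lmk_idx] - lmk2d||^2 + reg*||c||^2,
    where P is an orthographic projection that drops z.
    mean: (V, 3), basis: (V, 3, K), lmk_idx: (L,), lmk2d: (L, 2)."""
    k = basis.shape[-1]
    # Projected basis rows for the landmark vertices: (2L, K).
    A = basis[lmk_idx, :2, :].reshape(-1, k)
    # Residual of the 2D landmarks against the projected mean: (2L,).
    b = (lmk2d - mean[lmk_idx, :2]).reshape(-1)
    # Regularized normal equations.
    return np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ b)
```

Real fittings additionally optimize pose, expression, and a perspective camera with nonlinear iterations, but the reprojection-plus-regularization objective is the same.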
Part of the code is borrowed from FLAME_PyTorch.