Paper | Project page | Video
This is the official PyTorch code repository for the paper "Point-Based Modeling of Human Clothing" (ICCV 2021).
git clone https://github.com/izakharkin/point_based_clothing.git
cd point_based_clothing
git submodule init && git submodule update
sudo groupadd docker
sudo usermod -aG docker $USER
Download 10_nvidia.json and place it in the docker/ folder.

Inside the container, activate the environment:

source activate pbc
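A quick sanity check (not part of the repository, just a hedged example) to confirm that the pbc environment sees PyTorch and the GPU:

```python
# Hedged example: verify that PyTorch and CUDA are visible inside the `pbc` env.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```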
- In the Downloads section of the SMPLify website, download SMPLIFY_CODE_V2.ZIP and unpack it; move smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl to data/smpl_models/SMPL_NEUTRAL.pkl.
- Place the downloaded pre-trained checkpoints into the checkpoints/ folder.
- Move the provided psp/ sample folder to the samples/ folder.

To run our pipeline on custom data (images or videos), you need cloth segmentations and SMPL fits for your frames. We recommend running the corresponding methods on the internet_images/ test dataset first to make sure that your outputs exactly match the format inside internet_images/segmentations/cloth and internet_images/smpl/results.
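As a minimal, hypothetical illustration (the internet_images/segmentations/cloth and internet_images/smpl/results paths come from above; the dataset root and the file-matching logic are assumptions), one could check that every frame of a custom dataset has both a cloth segmentation and an SMPL fit:

```python
# Hypothetical sketch: verify that each frame has a matching cloth segmentation
# and SMPL fit, mirroring the internet_images/ layout described above.
from pathlib import Path

data_root = Path("internet_images")            # assumption: adjust to your dataset root
seg_dir = data_root / "segmentations" / "cloth"
smpl_dir = data_root / "smpl" / "results"

seg_ids = {p.stem for p in seg_dir.glob("*") if p.is_file()}
smpl_ids = {p.stem for p in smpl_dir.glob("*") if p.is_file()}

print("frames missing SMPL fits:", sorted(seg_ids - smpl_ids) or "none")
print("frames missing cloth segmentations:", sorted(smpl_ids - seg_ids) or "none")
```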
We provide scripts for geometry fitting and inference, as well as for appearance fitting and inference.
To fit an outfit style code to a single image, one can run:
python fit_outfit_code.py --config_name=outfit_code/psp
The learned outfit codes are saved to out/outfit_code/outfit_codes_<dset_name>.pkl by default. The visualization of the process is saved to out/outfit_code/vis_<dset_name>/.
Note: the visibility_thr hyperparameter in fit_outfit_code.py may affect the quality of the resulting point cloud (e.g. make it more sparse). Feel free to tune it if the result does not look right.
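For convenience, here is a hedged sketch of inspecting the saved codes; the file path follows the pattern above with <dset_name> = psp, and the assumption that the pickle stores a mapping from subject/outfit id to a latent code is ours:

```python
# Hedged sketch: load and inspect the fitted outfit codes.
# Assumption: the pickle is a dict mapping an id to a latent code (tensor/array).
import pickle

with open("out/outfit_code/outfit_codes_psp.pkl", "rb") as f:
    outfit_codes = pickle.load(f)

for key, code in outfit_codes.items():
    print(key, getattr(code, "shape", type(code)))
```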
To further infer the fitted outfit style on the training subjects or on new subjects, please see infer_outfit_code.ipynb. To run a Jupyter notebook server from the Docker container, run this inside the container:
jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser
To fit the clothing appearance to a sequence of frames, one can run:
python fit_appearance.py --config_name=appearance/psp_male-3-casual
The learned neural descriptors ntex0_<epoch>.pth and the neural rendering network weights model0_<epoch>.pth are saved to out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/ by default. The visualization of the process is saved to out/appearance/<dset_name>/<subject_id>/<experiment_dir>/visuals/.
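As a hedged example, the saved files can be inspected with torch.load; filling in <dset_name>/<subject_id> as psp/male-3-casual based on the config name above is a guess, and <experiment_dir>, the epoch number, and the checkpoint contents are also assumptions:

```python
# Hedged sketch: peek into the saved neural descriptors and renderer weights.
import torch

ckpt_dir = "out/appearance/psp/male-3-casual/<experiment_dir>/checkpoints"  # fill in your run dir
epoch = 10                                                                  # pick a saved epoch

ntex = torch.load(f"{ckpt_dir}/ntex0_{epoch}.pth", map_location="cpu")    # neural descriptors
model = torch.load(f"{ckpt_dir}/model0_{epoch}.pth", map_location="cpu")  # rendering network weights

for name, obj in [("ntex", ntex), ("model", model)]:
    keys = list(obj.keys())[:5] if isinstance(obj, dict) else type(obj)
    print(name, ":", keys)
```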
To further infer the fitted clothing point cloud and its appearance on the training subjects or on new subjects, please see infer_appearance.ipynb. To run a Jupyter notebook server from the Docker container, run this inside the container:
jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser
Question:
I am trying to obtain the final point cloud generated by the outfit code fitting module. Is there a way to save the 3D point clouds used to generate the output images/videos when running fit_outfit_code.py?
Answer:
There is no such function implemented out-of-the-box, but one can access the point clouds themselves via the data dicts (see the sketch below):
- During outfit code fitting, you could start by saving cloth_pcd to a file and check whether it is in the format you need. This is the point cloud predicted by the draping network from the current outfit_code.
- During inference, you could start right from the notebook (the third code cell): you can access the clothing point cloud by simply returning cloth_pcd from the infer_pid() function.

The draping network predicts the clothing point cloud from an outfit_code in exactly one place: inside the forward_pass() function.
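To make the answer concrete, here is a hedged sketch of dumping cloth_pcd to a .ply file; it assumes cloth_pcd is a (possibly batched) tensor or array of 3D points, which is our reading of the answer rather than a documented API:

```python
# Hedged sketch: save `cloth_pcd` (assumed to be an (N, 3) or (1, N, 3) set of
# 3D points predicted by the draping network) as an ASCII .ply file.
import numpy as np
import torch

def save_point_cloud(cloth_pcd, path="cloth_pcd.ply"):
    pts = cloth_pcd.detach().cpu().numpy() if torch.is_tensor(cloth_pcd) else np.asarray(cloth_pcd)
    pts = pts.reshape(-1, 3)  # drop a possible batch dimension
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
        for x, y, z in pts:
            f.write(f"{x} {y} {z}\n")
```

One could call this, for example, on the cloth_pcd value returned from infer_pid(), or right where forward_pass() produces it.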
If you find our work helpful, please do not hesitate to cite us:
@InProceedings{Zakharkin_2021_ICCV,
author = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
title = {Point-Based Modeling of Human Clothing},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {14718-14727}
}
Non-commercial use only.
We also thank the authors of the Cloth3D and PeopleSnapshot datasets.