SamsungLabs / point_based_clothing

Official PyTorch implementation of ICCV'21 paper Point-Based Modeling of Human Clothing
https://www.ilia.ai/research/point-based-clothing
MIT License

Point-Based Modeling of Human Clothing

Paper | Project page | Video

This is an official PyTorch code repository of the paper "Point-Based Modeling of Human Clothing" (accepted to ICCV, 2021).

Setup

Build docker

Download data

Custom data

To run our pipeline on custom data (images or videos):

We recommend running these methods on the internet_images/ test dataset first to make sure that your outputs exactly match the format inside internet_images/segmentations/cloth and internet_images/smpl/results.
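As a rough sanity check (a sketch only; the required sub-directories are taken from the internet_images/ test set described above, and my_custom_dataset is a hypothetical path standing in for your own data), one can verify that a custom dataset mirrors the expected layout before launching the pipeline:

```python
from pathlib import Path

# Reference layout from the provided test set; "my_custom_dataset" is a
# hypothetical path standing in for your own data.
REFERENCE_ROOT = Path("internet_images")
CUSTOM_ROOT = Path("my_custom_dataset")

REQUIRED_SUBDIRS = ["segmentations/cloth", "smpl/results"]

def check_layout(root: Path) -> None:
    """Report missing or empty required sub-directories under `root`."""
    for subdir in REQUIRED_SUBDIRS:
        target = root / subdir
        if not target.is_dir():
            print(f"[missing] {target}")
        elif not any(target.iterdir()):
            print(f"[empty]   {target}")
        else:
            n_files = sum(1 for p in target.rglob("*") if p.is_file())
            print(f"[ok]      {target} ({n_files} files)")

check_layout(REFERENCE_ROOT)  # should pass on the provided test set
check_layout(CUSTOM_ROOT)     # your data should mirror the same structure
```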

Run

We provide scripts for geometry fitting and inference, as well as for appearance fitting and inference.

Geometry (outfit code)

Fitting

To fit a style outfit code to a single image, one can run:

python fit_outfit_code.py --config_name=outfit_code/psp

The learned outfit codes are saved to out/outfit_code/outfit_codes_<dset_name>.pkl by default. The visualization of the process is in out/outfit_code/vis_<dset_name>/:

(animation: outfit code fitting, coarse stage)

(animation: outfit code fitting, fine stage)
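The exact contents of outfit_codes_<dset_name>.pkl are not documented here; assuming it is a plain pickled object (for instance, a dict mapping subject or sequence names to code tensors, which is only a guess), it can be inspected along these lines:

```python
import pickle

# "internet_images" is used here only as an example <dset_name>; the
# dict-of-codes structure below is an assumption, not a documented format.
path = "out/outfit_code/outfit_codes_internet_images.pkl"

with open(path, "rb") as f:
    outfit_codes = pickle.load(f)

print(type(outfit_codes))
if isinstance(outfit_codes, dict):
    for name, code in outfit_codes.items():
        shape = getattr(code, "shape", None)
        print(name, shape if shape is not None else type(code))
```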

Note: the visibility_thr hyperparameter in fit_outfit_code.py may affect the quality of the resulting point cloud (e.g. make it more sparse). Feel free to tune it if the result is not satisfactory.

(animation: effect of visibility_thr, 360° view)

Inference

(animation: outfit code inference)

To further infer the fitted outfit style on the training subjects or on new subjects, please see infer_outfit_code.ipynb. To run a Jupyter notebook server from the Docker container, run this inside the container:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Appearance (neural descriptors)

Fitting

To fit a clothing appearance to a sequence of frames, one can run:

python fit_appearance.py --config_name=appearance/psp_male-3-casual

The learned neural descriptors ntex0_<epoch>.pth and neural rendering network weights model0_<epoch>.pth are saved to out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/ by default. The visualization of the process is in out/appearance/<dset_name>/<subject_id>/<experiment_dir>/visuals/.
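If you want to peek at what was saved, a minimal sketch follows; it assumes the .pth files are ordinary torch-serialized objects (e.g., state dicts), and the concrete directory and epoch number are placeholder values to be substituted from your own run:

```python
import torch

# Placeholder path and epoch: substitute <dset_name>, <subject_id>,
# <experiment_dir> and the epoch number from your own run.
ckpt_dir = "out/appearance/psp/male-3-casual/experiment_dir/checkpoints"
epoch = 100

# map_location="cpu" lets you inspect the files without a GPU.
ntex = torch.load(f"{ckpt_dir}/ntex0_{epoch}.pth", map_location="cpu")
model = torch.load(f"{ckpt_dir}/model0_{epoch}.pth", map_location="cpu")

for label, obj in [("neural descriptors", ntex), ("renderer weights", model)]:
    if isinstance(obj, dict):
        print(label, "-", len(obj), "entries")
        for key, value in list(obj.items())[:5]:  # preview a few entries
            shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
            print("   ", key, shape)
    else:
        print(label, "-", type(obj))
```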

Inference

(animation: appearance inference)

To further infer the fitted clothing point cloud and its appearance on the training subjects or on new subjects, please see infer_appearance.ipynb. To run a Jupyter notebook server from the Docker container, run this inside the container:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Q&A

Question:

I am trying to obtain the final point cloud generated by the outfit code module. Is there a way to save the 3D point clouds used to generate the output images/videos when running fit_outfit_code.py?

Answer:

There is no such function implemented out-of-the-box, but one can access the point clouds by working with the data dicts:

  • During outfit code fitting, you could start by saving cloth_pcd to a file and checking whether it is in the format you need. This should be the point cloud predicted by the draping network from the current outfit_code.
  • During inference, you could start right from the notebook (the third code cell). You can access the clothing point cloud by simply returning cloth_pcd from the infer_pid() function.

There is only one place where the draping network predicts the clothing point cloud from an outfit_code: inside the forward_pass() function.
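As an illustration (assuming cloth_pcd is, or can be converted to, an N×3 array of XYZ coordinates, e.g., a torch tensor), the clothing point cloud could be dumped to an ASCII PLY file with a small helper like this:

```python
import numpy as np

def save_ply(points, path):
    """Write an N x 3 array of XYZ coordinates to an ASCII PLY file."""
    points = np.asarray(points, dtype=np.float32).reshape(-1, 3)
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        np.savetxt(f, points, fmt="%.6f")

# Hypothetical usage: cloth_pcd is assumed to be a torch tensor of shape
# [N, 3] (or [batch, N, 3]) produced by the draping network.
# save_ply(cloth_pcd.detach().cpu().numpy(), "cloth.ply")
```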

Citation

If you find our work helpful, please do not hesitate to cite us:

@InProceedings{Zakharkin_2021_ICCV,
    author    = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
    title     = {Point-Based Modeling of Human Clothing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14718-14727}
}

Non-commercial use only.

Related projects

We also thank the authors of the Cloth3D and PeopleSnapshot datasets.