PyTorch implementation of humanet, an auto-encoder for a pose-independent human shape space, introduced in:
Estimation of 3D Body Shape and Clothing Measurements from Frontal- and Side-view Images
Kundan Thota, Sungho Suh, Bo Zhou, Paul Lukowicz
Full paper
We recommend creating a new virtual environment for a clean installation of the dependencies. All following commands are assumed to be executed within this virtual environment. The code has been tested on Scientific Linux 7.9, Python 3.10, and CUDA 10.1.
python3 -m venv humanenv
source humanenv/bin/activate
pip install -U pip setuptools
pip install -r requirements.txt
Download the SMPL body model (Note: use version 1.0.0 with 10 shape PCs), rename the model files to {gender}_template.pkl for both genders, and place them in ./smpl/.
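A minimal sketch of the renaming step, assuming the standard file names from the SMPL v1.0.0 download (verify against your actual download before running):

```python
import shutil
from pathlib import Path

# Assumed source file names from the SMPL v1.0.0 release; adjust if yours differ.
renames = {
    "basicModel_f_lbs_10_207_0_v1.0.0.pkl": "female_template.pkl",
    "basicmodel_m_lbs_10_207_0_v1.0.0.pkl": "male_template.pkl",
}

download_dir = Path("/path/to/smpl/download")  # placeholder path
smpl_dir = Path("./smpl")
smpl_dir.mkdir(exist_ok=True)
for src, dst in renames.items():
    shutil.copy(download_dir / src, smpl_dir / dst)
```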
Download the SMPL body model as described above, plus our pre-trained demo model, and put the downloaded files under the weights folder:
humanet
├── CALVIS
│   └── ...
├── data                # folder with preprocessed data
│   └── ...
├── weights
│   ├── feature_extractor_female_50.pth
│   ├── feature_extractor_male_50.pth
│   ├── calvis_female_krr.pkl
│   └── calvis_male_krr.pkl
├── smpl
│   ├── female_template.pkl
│   └── male_template.pkl
└── ...
Steps to run the demo:
Step 1: Run the following snippet once to create the SMPL starter files.
python utils/preprocess_smpl.py --pickle /path/to/gender_pickle/file --gender male/female
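To sanity-check an SMPL pickle before preprocessing, a minimal inspection sketch (field names follow the public SMPL v1.0.0 release; unpickling the original files may require chumpy to be installed):

```python
import pickle

# Inspect an SMPL template pickle; field names per the SMPL v1.0.0 release.
with open("./smpl/male_template.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")

print(smpl["v_template"].shape)  # (6890, 3): mean-shape vertices
print(smpl["shapedirs"].shape)   # (6890, 3, 10): shape blend-shape basis
print(smpl["f"].shape)           # (13776, 3): triangle faces
```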
Step 2: Run the following command to resize the RGB images to 512x512 resolution. Note: for the best results (as shown in the paper), square-crop the images so that the person fills the frame, without additional objects in the scene.
python utils/image_utils.py --front /path/to/front/image --side /path/to/side/image
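For reference, the resize step amounts to something like the following sketch, which assumes a plain square resize (image_utils.py may additionally pad or crop):

```python
from PIL import Image

def resize_to_512(path_in: str, path_out: str) -> None:
    """Resize an RGB image to 512x512 (plain square resize)."""
    img = Image.open(path_in).convert("RGB")
    img.resize((512, 512), Image.BILINEAR).save(path_out)

resize_to_512("front.jpg", "front_512.jpg")
resize_to_512("side.jpg", "side_512.jpg")
```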
Step 3: Segment the images created in step 2 by following the example Jupyter notebook:
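If you prefer a scripted alternative to the notebook, here is a hedged sketch of person segmentation using torchvision's pretrained DeepLabV3; this is an assumption on our part, and the notebook may use a different model:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101

# Load a pretrained DeepLabV3 (on torchvision >= 0.13, pass weights="DEFAULT"
# instead of pretrained=True).
model = deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("front_512.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]  # (21, H, W) logits

# Class 15 is "person" in the Pascal VOC label set used by this model.
mask = (out.argmax(0) == 15).byte().numpy() * 255
Image.fromarray(mask, mode="L").save("front_512_mask.png")
```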
Step 4: Run the following command to produce the demo. Note: please input images taken at the same angles as used in the paper.
python demo.py --front_img /path/to/512x512/front/image/from/step-3 --side_img /path/to/512x512/side/image/from/step-3 --gender male/female \
--height (in meters) --weight (in kilos) --mesh_name /name/for/the/model.obj
This will generate the clothing measurements and the 3D model.
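The generated .obj can be inspected with any mesh viewer; for example (trimesh is not a stated dependency of this repository, just one convenient option):

```python
import trimesh

# Quick inspection of the generated mesh.
mesh = trimesh.load("model.obj")
print(mesh.vertices.shape, mesh.faces.shape)
mesh.show()  # interactive viewer; requires pyglet
```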
For training, we assume that the CALVIS dataset has been downloaded and placed in the project folder. The front and side images of each 3D human are captured as a scene and stored under the ./data folder:
python capture_images.py --resolution 512 --gender male/female --path path/to/.obj files/in/CALVIS/folder
This will create the scene images under the data folder.
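For intuition, the scene capture boils down to rendering each mesh from the front and then from the side. A hedged sketch using pyrender (camera placement, lighting, and renderer are assumptions; capture_images.py may differ, and headless machines may need PYOPENGL_PLATFORM=egl or osmesa):

```python
import numpy as np
import trimesh
import pyrender
from PIL import Image

def render_view(mesh: trimesh.Trimesh, out_path: str) -> None:
    """Render a single 512x512 view of the mesh with a simple camera/light."""
    scene = pyrender.Scene()
    scene.add(pyrender.Mesh.from_trimesh(mesh))
    cam_pose = np.eye(4)
    cam_pose[2, 3] = 2.5  # pull the camera back along +z
    scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=cam_pose)
    scene.add(pyrender.DirectionalLight(intensity=3.0), pose=cam_pose)
    color, _ = pyrender.OffscreenRenderer(512, 512).render(scene)
    Image.fromarray(color).save(out_path)

mesh = trimesh.load("subject.obj")  # placeholder CALVIS mesh
render_view(mesh, "data/subject_front.png")
# Rotate 90 degrees about the vertical (y) axis for the side view.
mesh.apply_transform(trimesh.transformations.rotation_matrix(np.pi / 2, [0, 1, 0]))
render_view(mesh, "data/subject_side.png")
```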
- Create a train/test split JSON file with an 80/20 split in the following format (a sketch for generating it follows the example):
{
  "male": {
    "train": ["sub_id1", "sub_id2", ...],
    "test": ["sub_id1", "sub_id2", ...]
  },
  "female": {
    "train": ["sub_id1", "sub_id2", ...],
    "test": ["sub_id1", "sub_id2", ...]
  }
}
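A minimal sketch for generating this split file, assuming one .obj per subject organized per gender (adapt the glob to your CALVIS layout):

```python
import json
import random
from pathlib import Path

random.seed(0)  # reproducible split

split = {}
for gender in ("male", "female"):
    # Assumed layout: CALVIS/<gender>/<sub_id>.obj
    ids = sorted(p.stem for p in Path("CALVIS", gender).glob("*.obj"))
    random.shuffle(ids)
    cut = int(0.8 * len(ids))  # 80/20 train/test split
    split[gender] = {"train": ids[:cut], "test": ids[cut:]}

with open("train_test_split.json", "w") as f:
    json.dump(split, f, indent=2)
```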
Then compute the measurements from the .obj files:
python utils/measures_.py --path /path/to/the/obj/files --gender male/female
Once the data is organized, we are ready for training:
python trainer.py --data_path /path/to/trainloader.npy --gender male/female --loss bce
The training will start. To customize the training, check the arguments defined in trainer.py and set them accordingly.
Once training is done, extract the low-dimensional embeddings of the humans by running the following:
python evaluator.py --data_path /path/to/dataloader.npy --gender male/female --mode features
Once the features are extracted, run the following command to check the results:
python measurement_evaluator.py --gender female/male
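For reference, the calvis_{gender}_krr.pkl files shipped in weights/ appear to be pickled kernel ridge regressors mapping embeddings to measurements; a hedged sketch of using them directly (the feature file name and array shapes below are assumptions, not the repository's actual output):

```python
import pickle
import numpy as np

# Load a pickled regressor (e.g. an sklearn KernelRidge estimator).
with open("weights/calvis_female_krr.pkl", "rb") as f:
    krr = pickle.load(f)

features = np.load("features_female.npy")  # assumed (N, D) embeddings
pred = krr.predict(features)               # assumed (N, 3): chest, waist, hip
print(pred[:5])
```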
Clothing Measurement Error (in mm):

| | male dataset | female dataset |
|---|---|---|
| chest | 5.21 ± 5.23 | 3.37 ± 7.67 |
| waist | 2.28 ± 2.66 | 2.29 ± 2.36 |
| hip | 2.80 ± 2.66 | 2.75 ± 2.61 |
3D Shape Error (in milli-units):

| | male dataset | female dataset |
|---|---|---|
| per-vertex error | 0.52 ± 1.01 | 0.48 ± 0.94 |
If you use our code/work in your research, please consider citing:
@INPROCEEDINGS{9897520,
  author={Prabhu Thota, Kundan Sai and Suh, Sungho and Zhou, Bo and Lukowicz, Paul},
  booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
  title={Estimation of 3D Body Shape and Clothing Measurements from Frontal- and Side-View Images},
  year={2022},
  pages={2631-2635},
  doi={10.1109/ICIP46576.2022.9897520}
}