
Pose-independent 3D anthropometry

This repository presents the code for the paper "Pose-independent 3D Anthropometry from Sparse Data", presented at the ECCV 2024 workshop "T-CAP 2024: Towards a Complete Analysis of People".

TL;DR: Estimate 11 body measurements from 70 body landmarks of a posed subject.



πŸ”¨ Getting started

You can use 🐳 Docker to facilitate running the code. After cloning the repo, run in a terminal:

cd docker
sh build.sh
sh docker_run.sh CODE_PATH DATA_PATH

by adjusting CODE_PATH to the cloned pose-independent-anthropometry directory and DATA_PATH to the data directory you want to access inside the container. This creates a pose-independent-anthropometry-container, which you can attach to by running:

docker exec -it pose-independent-anthropometry-container /bin/bash

🚧 If you do not want to use Docker, you can install the packages from docker/requirements.txt into your own environment. 🚧


Download:

Finally, initialize the smpl-anthropometry submodule by running:

git submodule update --init --recursive



πŸ’» Datasets

The datasets used in the paper are based on CAESAR, DYNA or 4DHumanOutfit.

Once you obtain the datasets, you can use our scripts to create all of the dataset versions used in the paper.

CAESAR dataset preprocessing

The dataset structure assumed is the following:

{path/to/CAESAR}/Data AE2000/{country}/PLY and LND {country}/

which contains scans in .ply.gz format and landmarks in .lnd format, where country is one of Italy, North America or The Netherlands.
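For a quick sanity check, here is a minimal Python sketch that enumerates the scans and landmark files, assuming the directory layout above (the CAESAR root path is illustrative):

```python
from pathlib import Path

# Illustrative CAESAR root; adjust to your local copy.
caesar_root = Path("/path/to/CAESAR")

for country in ["Italy", "North America", "The Netherlands"]:
    scan_dir = caesar_root / "Data AE2000" / country / f"PLY and LND {country}"
    scans = sorted(scan_dir.glob("*.ply.gz"))   # compressed scans
    landmarks = sorted(scan_dir.glob("*.lnd"))  # landmark files
    print(f"{country}: {len(scans)} scans, {len(landmarks)} landmark files")
```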


You also need the SMPL fittings to the scans (both the parameter fittings and the vertex fittings) in the format

{path/to/fitting}/{subject_name}.npz

To create the fittings in this format, use the SMPL-Fitting repository and run:

python fit_body_model.py onto_dataset --dataset_name CAESAR

python fit_vertices.py onto_dataset --dataset_name CAESAR --start_from_previous_results <path-to-previously-fitted-bm-results>

where <path-to-previously-fitted-bm-results> is the path to the fitted SMPL body model results from the first command.
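To verify a fitting file, here is a minimal sketch that lists the arrays stored in one .npz; the exact keys depend on the SMPL-Fitting output, so it simply prints whatever is there (the path is illustrative):

```python
import numpy as np

# Illustrative path; point this at one of your {subject_name}.npz fittings.
fit = np.load("/path/to/fitting/subject_name.npz")

# Print every stored array and its shape; keys depend on the SMPL-Fitting
# output (e.g. body model parameters and fitted vertices).
for key in fit.files:
    print(key, fit[key].shape)
```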

Finally, adjust the following paths in configs/config_real.yaml:


Training data

To create the training dataset, first complete the steps from CAESAR preprocessing. Then, you need to create the poses used for training by running:

python dataset.py cluster_dataset 
cd scripts
python create_training_poses.py --fitted_bm_dir <path/to/fitted/SMPL/to/CAESAR> 

where fitted_bm_dir is the path to the SMPL body model fitted to the CAESAR scans (see CAESAR preprocessing).

Finally, you can create the training data using:

cd scripts

python create_CAESAR_POSED_train_dataset.py --save_to <path/to/save/the/dataset/to>

where save_to is the path where you want to save the created dataset.


Validation data

To create the validation dataset, first complete the steps from CAESAR preprocessing. Then, you can run:

cd scripts

python create_CAESAR_POSED_val_dataset.py --save_to <path/to/save/the/dataset/to>

where save_to is the path where you want to save the created dataset.


Testing data

🧍🏽 CAESAR A-pose (Tables 1 & 2 left part)

To create the dataset, first complete the steps from CAESAR preprocessing. Then, you can run:

cd scripts

python create_CAESAR_APOSE_test_dataset.py --save_to <path/to/save/the/dataset/to>

where save_to is the path where you want to save the created dataset.


🧍🏽 CAESAR A-pose with noisy landmarks (Tables 1 & 2 right part)

To create the dataset, first complete the steps from CAESAR preprocessing. Since the noise is added randomly, we provide the displacement vectors from the original landmarks in data/processed_datasets/dataset_test_unposed_noisy_displacements. To obtain the noisy landmarks, add the displacements to the original landmarks provided in the CAESAR dataset:

caesar_noisy_landmarks = caesar_landmarks + displacement_vector
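In NumPy terms this is a single addition; a minimal sketch, assuming both the landmarks and the displacements are loaded as (N, 3) arrays (file names are hypothetical):

```python
import numpy as np

# Hypothetical file names; load the original CAESAR landmarks and the
# provided displacement vectors for the same subject.
caesar_landmarks = np.load("subject_0001_landmarks.npy")         # (N, 3)
displacement_vector = np.load("subject_0001_displacements.npy")  # (N, 3)

# The noisy landmarks are the original landmarks shifted by the offsets.
caesar_noisy_landmarks = caesar_landmarks + displacement_vector
```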

If you want to create your own noisy dataset instead, you can run:

cd scripts

python create_CAESAR_NOISY_test_dataset.py --save_to <path/to/save/the/dataset/to>

where save_to is the path where you want to save the created dataset.


πŸͺ‘ CAESAR sitting B-pose (Table 3)

Because the sitting B-pose in CAESAR does not have all of the landmarks necessary to run our method, we transfer the missing landmarks using the fitted SMPL body model. To transfer the landmarks, run:

cd annotate

python annotate_CAESAR_landmarks.py --caesar_path <path/to/CAESAR> --fitting_path <path/to/fitted/SMPL/to/scans> --save_to <path/to/save/the/landmarks/to> 

cd ..

Then, you can create the dataset with:

cd scripts

python create_CAESAR_SITTING_test_dataset.py --save_to <path/to/save/the/dataset/to> --transferred_landmark_path <path/to/transferred/landmarks>

where save_to is the path where you want to save the created dataset and transferred_landmark_path is the path where you saved the transferred landmarks from the command above.


πŸ’ƒ CAESAR arbitrary pose (Table 4)

To create the dataset, first complete the steps from CAESAR preprocessing. Then, you can run:

cd scripts

python create_CAESAR_POSED_test_dataset.py --save_to <path/to/save/the/dataset/to>

where save_to is the path where you want to save the created dataset.


πŸ‘―β€β™€οΈ DYNA dynamic sequence (Table 5)

Download the dataset from the DYNA website (you will need to sign up). You only need the dyna_male.h5 and dyna_female.h5 files.


πŸ•Ί 4DHumanOutfit clothed sequences (Table 6)

The dataset structure assumed is the following:

{path/to/4DHumanOutfit}/{subject_name}/{subject_name}-{clothing_type}-{action}/*/model-*.obj

After you get the dataset, you can use:

cd scripts
bash unzip_4DHumanOutfit_scans.sh <path/to/4DHumanOutfit> <unzip/destination/path>

to unzip the dataset, where <unzip/destination/path> is the folder where you want to unzip it.
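Once unzipped, a minimal sketch to list the per-frame meshes, assuming the layout above (the destination path is illustrative):

```python
from pathlib import Path

# Illustrative unzip destination; the layout follows
# {root}/{subject_name}/{subject_name}-{clothing_type}-{action}/*/model-*.obj
root = Path("/unzip/destination/path")

for seq_dir in sorted(root.glob("*/*")):
    frames = sorted(seq_dir.glob("*/model-*.obj"))
    print(f"{seq_dir.name}: {len(frames)} frames")
```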

The following subjects are used:

ben
leo
mat
kim
mia
sue

in tight clothing, performing the following actions:

dance
run
avoid

The scans provided by 4DHumanOutfit are the ones with resolution OBJ_4DCVT_15k.

The dataset also comes with fitted SMPL parameters (available upon request), obtained using the approach from [3] and stored in the same structure as the provided scans:

{path/to/fittings}/{subject_name}/{subject_name}-{clothing_type}-{action}/{parameter}.pt

where {parameter} is any of the following: betas.pt, poses.pt and trans.pt.
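A minimal sketch for loading the fitted parameters of one sequence; the sequence directory name (including the clothing token) is illustrative, and the tensor shapes depend on the fitting:

```python
import torch

# Illustrative sequence directory, following
# {path/to/fittings}/{subject_name}/{subject_name}-{clothing_type}-{action}/
seq_dir = "/path/to/fittings/ben/ben-tight-run"

betas = torch.load(f"{seq_dir}/betas.pt")  # shape parameters
poses = torch.load(f"{seq_dir}/poses.pt")  # per-frame pose parameters
trans = torch.load(f"{seq_dir}/trans.pt")  # per-frame translations
print(betas.shape, poses.shape, trans.shape)
```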

Finally, you can obtain the landmarks by running:

cd annotate

python annotate_4DHumanOutfit_landmarks.py --scan_paths <path/to/4DHumanOutfit> --fit_paths <path/to/fittings> --transfer_method simple

The landmarks will be saved in the same folder as the fitted parameters (fit_paths); the transfer_method used in the paper is simple.


πŸ‹οΈ Training

To train our model you can run:

python train.py

To visualize the loss curves during training, run in a separate terminal:

visdom -p <port>

and navigate in your browser to http://localhost:<port>/. If you do not see any curves, select the lm2meas environment in the dropdown menu and check that the port matches the one set under visualization/port in configs/config_real.yaml.


The training parameters are set in configs/config_real.yaml. To train the same model as in our paper, you can leave all the parameters as they are, except for potentially fixing the paths defined throughout the configuration file. We briefly explain all the parameters for easier reference:

general parameters:

visualization parameters:

learning parameters:

paths parameters:

model_configs parameters:

dataset_configs parameters:

feature_transformers parameters:

learning_rate_schedulers parameters:

weight_init_options parameters:


Note that many of the parameters are not necessary to successfully train the model.



πŸ’― Evaluation

You can use evaluate.py to reproduce the results from the paper. We provide our trained model in results/2024_07_11_09_42_48.

The dataset_path argument to the evaluate.py script should correspond to the paths of the datasets you created in πŸ’» Datasets. If you used our default paths, you can omit it in the following calls.

🧍🏽 CAESAR A-pose (Tables 1 & 2 left part)

python evaluate.py CAESAR_STAND -R results/2024_07_11_09_42_48 --dataset_path <path/to/dataset>


🧍🏽 CAESAR A-pose with noisy landmarks (Tables 1 & 2 right part)

python evaluate.py CAESAR_NOISY -R results/2024_07_11_09_42_48 --pelvis_normalization --dataset_path <path/to/dataset>


πŸͺ‘ CAESAR sitting B-pose (Table 3)

python evaluate.py CAESAR_SIT_TRANS_BM -R results/2024_07_11_09_42_48 --dataset_path <path/to/dataset>


πŸ’ƒ CAESAR arbitrary pose (Table 4)

python evaluate.py CAESAR_POSED -R results/2024_07_11_09_42_48 --dataset_path <path/to/dataset>


πŸ‘―β€β™€οΈ DYNA dynamic sequence (Table 5)

python evaluate.py DYNA_POSED -R results/2024_07_11_09_42_48 --dataset_path <path/to/dataset> --pelvis_normalization


πŸ•Ί 4DHumanOutfit clothed sequences (Table 6)

python evaluate.py 4DHumanOutfit -R results/2024_07_11_09_42_48 --pelvis_normalization --parameters_path <path/to/params>

where parameters_path is the path to the SMPL parameters fitted to the scans, along with the landmarks obtained in πŸ’» Datasets.



0️⃣ Baseline models

To evaluate the baseline models described in the paper on a given dataset, first fit the SMPL body model onto the provided landmarks by running:

cd scripts
python add_shape_to_dataset.py --dataset_path <path/to/dataset>

after which you can evaluate it with:

python evaluate_baseline.py --dataset_path <path/to/evaluation/dataset>


1️⃣ Running [4]

To run the method from [4], clone their repository Landmarks2Anthropometry and switch to the eccv24 branch, where the authors provide the scripts to run their method on the datasets from the paper. The datasets are set with the dataset_path variable and correspond to the ones created in πŸ’» Datasets.

To evaluate on the CAESAR A-pose, run:

python eccv_stand.py --dataset_path <path/to/CAESAR/Apose/dataset>

To evaluate on the CAESAR A-pose with noisy landmarks, run:

python eccv_noisy.py --dataset_path <path/to/CAESAR/Apose/noisy/dataset>

To evaluate on the CAESAR sitting B-pose, run:

python eccv_sit.py --dataset_path <path/to/CAESAR/Bpose/dataset>

To evaluate on the CAESAR arbitrary pose, run:

python eccv_posed.py --dataset_path <path/to/CAESAR/posed/dataset>



πŸ“ Notes

Subjects

The subjects we use for training, validation and testing are the same as those used in [1], excluding the ones with missing landmarks or measurements. See the paper for more details.

LaTeX tables

We provide the LaTeX tables from the paper so you can easily compare with our model:

Find pose-invariant features

We already provide the pose-invariant features in data/landmarks2features/lm2features_distances_grouped_from_SMPL_INDEX_LANDAMRKS_REVISED_inds_removed_inds_with_median_dist_bigger_than_one.npy.

To recreate these features or create others, you can run:

cd scripts
python find_pose_invariant_landmark_features.py --caesar_dir <path/to/caesar/dataset>

Landmark-measurement ambiguity

To create Figure 4 from the paper you can run:

cd scripts
python find_landmark_measurements_ambiguity.py

which will create a figure named ambiguity_max_landmarks_wrt_measurements.pdf.

πŸ“Š Dataset statistics

To find out the average displacement of each landmark in the Noisy CAESAR dataset run:

python compute_stats.py NoisyCaesar

To find out how many and which landmarks are missing in the original CAESAR sitting dataset, run:

python compute_stats.py CAESAR_SITTING

References

Parts of the code are inspired by smplify-x and 3D-CODED. We thank the authors for providing their code.


[1] Tsoli et al.: "Model-based Anthropometry: Predicting Measurements from 3D Human Scans in Multiple Poses"
[2] Pavlakos et al.: "Expressive Body Capture: 3D Hands, Face, and Body from a Single Image"
[3] Marsot et al.: "Representing motion as a sequence of latent primitives, a flexible approach for human motion modelling"
[4] Bojanić et al.: "Direct 3D Body Measurement Estimation from Sparse Landmarks"