qianlim / SCALE

Official implementation of "SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements", CVPR 2021 https://arxiv.org/abs/2104.07660
https://qianlim.github.io/SCALE

question: how to run on custom data? #6

Open codesavory opened 3 years ago

codesavory commented 3 years ago

Hello, I have a custom clothed mesh and an SMPL model fitted to it. I now want to generate animations on this custom mesh, like the ones shown in the example. Should I retrain the model on this new data? If so, how (the demo contains multiple npz files under test, train, and val, but I have only one)? Or can I use the pre-trained model on custom meshes to generate the desired clothed animations? Thank you in advance.

qianlim commented 3 years ago

Hi, the model is subject-specific; e.g. the pre-trained skirt model will always generate skirt results for a given pose. So yes, you need to retrain on the new data. Please refer to the lib_data folder for the code and instructions on how to process your own data.
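For packing custom frames into per-split npz files like the demo layout (train/val/test directories), a minimal sketch could look like the following. The key names (`scan_verts`, `pose`, `transl`) and array shapes are illustrative assumptions, not the exact format SCALE's loaders expect -- consult lib_data in the repo for the real field names:

```python
import os
import numpy as np

def pack_frame(out_path, scan_verts, smpl_pose, smpl_transl):
    """Save one frame of custom data as a compressed npz.

    NOTE: the keys below are placeholders for illustration; check the
    data-processing scripts in lib_data for the keys SCALE actually uses.
    """
    np.savez_compressed(
        out_path,
        scan_verts=smpl_verts_f32(scan_verts),  # clothed-scan vertices, (N, 3)
        pose=smpl_verts_f32(smpl_pose),         # SMPL axis-angle pose, (72,)
        transl=smpl_verts_f32(smpl_transl),     # global translation, (3,)
    )

def smpl_verts_f32(arr):
    """Cast to float32, the dtype typically used for training data."""
    return np.asarray(arr, dtype=np.float32)

# Toy example: write one frame with stand-in data into a train directory.
os.makedirs('packed/train', exist_ok=True)
pack_frame(
    'packed/train/frame_0000.npz',
    scan_verts=np.random.rand(6890, 3),
    smpl_pose=np.zeros(72),
    smpl_transl=np.zeros(3),
)
loaded = np.load('packed/train/frame_0000.npz')
print(sorted(loaded.files))
```

With only a single frame, the same file would have to serve as train, val, and test, which (as discussed below in the thread) will not give a model that generalizes to new poses.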

AstitvaSri commented 3 years ago

@qianlim If I have a single pair consisting of a clothed mesh and its corresponding fitted unclothed SMPL mesh, can I train the model on this single sample, and will it generalize to unseen poses of the same SMPL body, producing clothing deformations matching the clothed mesh used in training? In short, is training on a single pose sufficient to generalize well to various unseen poses? If not, how many training, testing, and validation samples are recommended to obtain a good model?

qianlim commented 3 years ago

Hi, training on a single pose won't make the model generalize to unseen poses (in that case the model has no knowledge of how the shape should deform as the pose changes). The number of train/test examples used in the paper depends on the outfit type in the CAPE data, but in general it is on the order of ~10^3 training frames covering a sufficient variety of poses (see appendix A.3 of the paper).
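Given a motion sequence of roughly 10^3 frames, a simple random split into train/val/test sets could be sketched as below. The 80/10/10 fractions are an illustrative choice, not the paper's per-outfit protocol (see appendix A.3 for the actual splits used on the CAPE data):

```python
import numpy as np

def split_frames(n_frames, val_frac=0.1, test_frac=0.1, seed=0):
    """Randomly partition frame indices into train/val/test subsets.

    NOTE: fractions are an assumption for illustration; the paper uses
    per-outfit splits of the CAPE dataset rather than a fixed ratio.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_frames)
    n_val = int(n_frames * val_frac)
    n_test = int(n_frames * test_frac)
    return {
        'val': idx[:n_val],
        'test': idx[n_val:n_val + n_test],
        'train': idx[n_val + n_test:],
    }

# ~10^3 frames, as suggested above for covering enough pose variation.
splits = split_frames(1000)
print({k: len(v) for k, v in splits.items()})
```

A random split like this only tests interpolation within the captured poses; held-out *sequences* (whole motions left out of training) give a stricter test of pose generalization.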