gaolinorange / Automatic-Unpaired-Shape-Deformation-Transfer

SIGGRAPH ASIA 2018

Pre-trained Weights #6

Open aniskacem opened 5 years ago

aniskacem commented 5 years ago

Dear authors,

Thanks for providing an open source implementation for your interesting paper.

Please would it be possible to provide the pre-trained weights as well?

Many thanks in advance!

Best regards,

ArashHosseini commented 5 years ago

+1

ArashHosseini commented 5 years ago

@aniskacem did you find any solution, or did you train it yourself?

aniskacem commented 5 years ago

I tried to train it by myself, but I noticed that the code is incomplete. It doesn't even include an implementation of the GAN part of the method.

tommaoer commented 4 years ago

@aniskacem Thanks for your comments. We missed this part and forgot to update it. We have now updated the files in the folder './python' and will continue working on the code to make it easier to use and read. In the meantime, we have also uploaded a demo and the pre-trained weights for Fig. 22 to Google Drive. If you have more problems, please feel free to contact me.

sjz-suyi commented 4 years ago

> I tried to train it by myself, but I noticed that the code is incomplete. It doesn't even include an implementation of the GAN part of the method.

Did you succeed?

jeff-rp commented 4 years ago

Hello tommaoer, thank you for sharing the demo data. I loaded the demo network and ran python test_vae, test_metric, and test_gan successfully. However, when I want to reconstruct the geometry using recon_from_vae.m, I need the base obj file. I guess the demo data comes from https://people.csail.mit.edu/sumner/research/deftransfer/data.html, but the vertex counts of the horse and camel models there differ from horse.mat and camel.mat. So which base obj should I provide to recon_from_vae.m to verify the result?
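As a side note, this is roughly how I compare the vertex counts (the .mat variable names and the `horse_base.obj` filename below are just guesses for my own files, not the repo's):

```python
# Compare the vertex count of a candidate base .obj with the demo .mat data.
import scipy.io

def count_obj_vertices(path):
    # Count "v x y z" vertex lines in a Wavefront .obj file.
    with open(path) as f:
        return sum(1 for line in f if line.startswith('v '))

mat = scipy.io.loadmat('horse.mat')
print('arrays in horse.mat:', {k: v.shape for k, v in mat.items() if not k.startswith('__')})
print('vertices in base obj:', count_obj_vertices('horse_base.obj'))
```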

Many thanks and best regards

tommaoer commented 4 years ago

Yes, the number of vertices is different from Sumner's data: because of limited graphics card RAM, we simplified the original meshes. We have now updated the 'demo data' on the shared Google Drive link. Please check it. @jeff-rp

jeff-rp commented 4 years ago

> Yes, the number of vertices is different from Sumner's data: because of limited graphics card RAM, we simplified the original meshes. We have now updated the 'demo data' on the shared Google Drive link. Please check it. @jeff-rp

It works! Thank you very much

jeff-rp commented 4 years ago

I have a question about the feature vector of the input and output. In model.py, there is a flag "useS" that controls the feature size (3 or 9). The demo network uses the size-3 feature (no "FS" feature). What is the criterion for choosing the feature size? And in the size-3 case, what "FS" should I provide to the geometry reconstruction function?

Thank you

tommaoer commented 4 years ago

OK. The feature 'FS' is the scaling/shearing information of a model relative to the reference model, and it is calculated by polar decomposition. The demo data consists of different poses of the same reference model, so the scaling/shearing part relative to the reference makes up an insignificant proportion of the models' features, which makes the data easier for the network to learn. You can adjust the flag 'useS' to suit your own data. For the details of the ACAP feature (LOGR and S), please refer to [Gao et al. 2017, Sparse data driven mesh deformation]. @jeff-rp
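For illustration only, a minimal numpy/scipy sketch of the polar-decomposition step (this is not the repo's code; the deformation gradient T below is a toy example):

```python
# Split a deformation gradient T (relative to the reference model) into a
# rotation part (-> LOGR) and a scaling/shearing part (-> the S / 'FS' feature).
import numpy as np
from scipy.linalg import polar, logm

T = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.0, 0.0, 1.1]])

R, S = polar(T)      # polar decomposition: T = R @ S, R a rotation, S symmetric PSD
log_R = logm(R)      # matrix logarithm of the rotation (the LOGR part of ACAP)

print('scaling/shearing S:\n', S)
print('log-rotation:\n', log_R)
```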

jeff-rp commented 4 years ago

> OK. The feature 'FS' is the scaling/shearing information of a model relative to the reference model, and it is calculated by polar decomposition. The demo data consists of different poses of the same reference model, so the scaling/shearing part relative to the reference makes up an insignificant proportion of the models' features, which makes the data easier for the network to learn. You can adjust the flag 'useS' to suit your own data. For the details of the ACAP feature (LOGR and S), please refer to [Gao et al. 2017, Sparse data driven mesh deformation]. @jeff-rp

Thanks for your reply. I'll check it.

jeff-rp commented 4 years ago

I tried the GAN part of the pre-trained network and ran test_gan.py successfully. Here is my result (also the image at the bottom). The left column shows horse poses (source), and the right column shows the corresponding camel poses transferred from the horse (A Gen B).

However, the camel poses are not very similar to the corresponding horse poses. The top camel pose has strangely twisted legs, while the source horse is just standing normally. I am not sure whether this is the correct result or I missed something.

Best regards

[image: horse-camel]

tommaoer commented 4 years ago

Yes. This is because the lightfield metric is occasionally quite unreliable; you can check this by using the lightfield metric to query the nearest models. But I think you can adjust the weight of the SimNet in the GAN part. The SimNet guides the GAN to learn a mapping that suits the lightfield distance, but we do not want to over-rely on that distance metric, so you can adjust its weight in the GAN part. Recently, we have also been improving the performance of the SimNet by using a more reliable metric (combined with ICP) or developing new algorithms. @jeff-rp
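For example, the weight on the SimNet term in the generator loss could be exposed like this (the loss names and values below are only illustrative and do not match train_gan.py exactly):

```python
import tensorflow as tf

# Stand-in loss tensors; in the real train_gan.py these would be the adversarial,
# cycle/reconstruction, and SimNet (lightfield similarity) terms of the generator loss.
adv_loss_g = tf.constant(1.0)
cycle_loss = tf.constant(0.5)
sim_loss   = tf.constant(0.3)

SIM_WEIGHT = 0.1  # lower this if the lightfield-guided SimNet term misleads the mapping
g_loss = adv_loss_g + 10.0 * cycle_loss + SIM_WEIGHT * sim_loss
```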

jeff-rp commented 4 years ago

In "train_gan.py", the inputs of "train_op_g" named "_model.random_a", "_model.random_b" are fed with Gaussian random numbers. However I think they should be fed with the outputs of "z_mean_test_a" and "z_mean_test_b" which are meaningful samples in latent space? because I checked the inputs of trainable generator functions are simply placeholder "random_a", and "random_b".

miaoYuanyuan commented 3 years ago

> @aniskacem Thanks for your comments. We missed this part and forgot to update it. We have now updated the files in the folder './python' and will continue working on the code to make it easier to use and read. In the meantime, we have also uploaded a demo and the pre-trained weights for Fig. 22 to Google Drive. If you have more problems, please feel free to contact me.

Can these pre-trained weights be used for human deformation transfer?