[Open] tomguluson92 opened this issue 4 years ago
Hi! First of all, thank you for putting efforts in this! I'm keen to learn about this as well.
I have not tried SoftRas yet, but I would not expect it to be far worse. It is slightly worrisome, since SoftRas softens the forward rendering to smooth the gradients, which can lead to inconsistency and provide noisy signals for the shape. That said, I don't expect it to fail completely.
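To illustrate the trade-off described above, here is a hedged, minimal sketch (not the actual SoftRas code): soft rasterization replaces the hard inside/outside test for a pixel with a sigmoid of the signed distance to a face, scaled by a temperature `sigma`. This gives usable gradients everywhere, at the cost of blurring the forward rendering.

```python
import torch

# Signed distances from a few pixels to a triangle edge (toy values).
d = torch.tensor([-0.5, -0.1, 0.0, 0.1, 0.5], requires_grad=True)
sigma = 1e-1  # smaller sigma -> sharper, closer to hard rasterization

hard = (d > 0).float()           # step function: zero gradient almost everywhere
soft = torch.sigmoid(d / sigma)  # smooth coverage probability in (0, 1)

soft.sum().backward()
# d.grad is strictly positive even far from the edge, which smooths
# optimization but also softens (blurs) the rendered silhouette.
```

Annealing `sigma` toward zero recovers the hard rendering while keeping gradients early in training.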
Neural Mesh Renderer (NMR) on the other hand performs exact forward rendering. However, we found gradients through the texture are noisy and unstable for training. Therefore we only use it to render the depth and use grid_sample to warp the texture, as explained in the supplementary material.
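The texture-warping step mentioned above can be sketched with `torch.nn.functional.grid_sample`. This is a hedged toy example, not the repo's actual code: the identity sampling grid (built with `affine_grid`) stands in for the flow field that would be derived from the rendered depth.

```python
import torch
import torch.nn.functional as F

# A toy 4x4 single-channel texture, shape (N, C, H, W).
tex = torch.arange(16.0, requires_grad=True).reshape(1, 1, 4, 4)

# Identity sampling grid in normalized [-1, 1] coordinates, shape (N, H, W, 2).
# In the real pipeline this grid would come from the rendered depth/geometry.
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=True)

# Bilinear warp of the texture; gradients flow through both the texture
# values and the grid coordinates, avoiding the noisy texture gradients
# of rasterizing the texture directly.
warped = F.grid_sample(tex, grid, align_corners=True)
```

With the identity grid, `warped` reproduces the input texture exactly; a non-trivial grid warps it while remaining differentiable.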
However, when I look at the results you show in your repo, it seems you were not even able to reproduce the results with NMR? Did you change anything? If not, have you trained the model sufficiently long (e.g., 12+ hours)?
I may not be able to look into SoftRas recently, but I think investigating this issue can be rather helpful for many users. So keep us posted with any updates!
Thanks! The poor results may be due to training on a small dataset of only 40 pictures (CelebA is too big for me to test the effects on).
I see! Have you tried to train with SoftRas on the full training set? Does it still not work well?
I will give it a try and tell you my result.
@tomguluson92 Hi, have you trained on the full training set? Are the results better?
Not yet... but if you use WeChat, we can communicate through that. My email is samuel.gao023@gmail.com
Hi, I use pytorch3d as well, and my wechat id is tj2014wql, hope to communicate with you! Thanks!
Hi @elliottwu , any update for this topic?
Hi, Wu! Congratulations on unsup3d being selected as the CVPR 2020 Best Paper. Inspired by your repo, I replaced the neural renderer with the pytorch3d point cloud renderer. My repo is: https://github.com/tomguluson92/unsup3D_pytorch3d I found it is inferior to your repo, even though, as far as I know, the SoftRas rasterizer inside pytorch3d is a more powerful differentiable renderer. Do you have time to try pytorch3d and find out the difference between them? Thanks a lot!