ShichenLiu / SoftRas

Project page of paper "Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning"
MIT License

texture reconstruction #44

Closed · tejank10 closed this issue 3 years ago

tejank10 commented 4 years ago

Hello @ShichenLiu @chenweikai ,

Could you please provide some details about texture reconstruction? I am primarily interested in the following points:

Thanks.

ShichenLiu commented 4 years ago

Hi there,

In our experiments we used the surface texture type, and the Laplacian neighbors are set to the adjacent triangles in order to compute the Laplacian loss on color. We used a color palette size of about 10~20. We think a relatively small palette size encourages the network to learn a better concept of a color palette.
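
(For illustration only, here is a minimal PyTorch sketch of a palette-style color head along the lines described above: the network predicts a small palette of roughly 10~20 RGB colors plus per-face selection weights, and mixes them into per-face colors. All class and variable names here are hypothetical and not part of the SoftRas codebase.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaletteColorHead(nn.Module):
    """Hypothetical sketch: predict a small color palette and per-face
    selection weights, then mix them into per-face RGB colors."""
    def __init__(self, feat_dim, num_faces, palette_size=16):
        super().__init__()
        self.palette_size = palette_size
        self.num_faces = num_faces
        # palette: palette_size RGB colors shared across the whole mesh
        self.fc_palette = nn.Linear(feat_dim, palette_size * 3)
        # selection: one distribution over the palette per face
        self.fc_select = nn.Linear(feat_dim, num_faces * palette_size)

    def forward(self, feat):
        b = feat.shape[0]
        palette = torch.sigmoid(self.fc_palette(feat)).view(b, self.palette_size, 3)
        logits = self.fc_select(feat).view(b, self.num_faces, self.palette_size)
        select = F.softmax(logits, dim=-1)
        # per-face colors (b, num_faces, 3); add a texel dim for 'surface' textures
        colors = torch.bmm(select, palette)
        return colors.unsqueeze(2)  # (b, num_faces, 1, 3)
```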

tejank10 commented 4 years ago

Thank you very much @ShichenLiu for clarifying the details. Is the generated texture passed directly to the renderer, or does it undergo a transformation (as in the case of the generated displacements, which are added to the vertices)?

ShichenLiu commented 4 years ago

The generated texture is passed directly to the renderer, since it is "picked" from the source images.
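
(For reference, a rough sketch of feeding per-face colors straight to the renderer, assuming the soft_renderer API used in the repo's demo scripts; the tensors are dummies and exact argument names should be checked against the repo.)

```python
import torch
import soft_renderer as sr

# Assumed inputs from a reconstruction network (shapes only, values are dummies)
batch, num_verts, num_faces = 1, 642, 1280
vertices = torch.rand(batch, num_verts, 3).cuda()                  # (B, V, 3)
faces = torch.randint(0, num_verts, (batch, num_faces, 3)).cuda()  # (B, F, 3)
textures = torch.rand(batch, num_faces, 1, 3).cuda()               # per-face RGB, no UVs

# 'surface' textures are consumed by the renderer as-is, with no extra transform
mesh = sr.Mesh(vertices, faces, textures=textures, texture_type='surface')
renderer = sr.SoftRenderer(image_size=64, camera_mode='look_at')
renderer.transform.set_eyes_from_angles(2.732, 30, 0)  # distance, elevation, azimuth
images = renderer.render_mesh(mesh)                    # (B, 4, H, W) RGBA
```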


zyz-notebooks commented 4 years ago

> Thank you very much @ShichenLiu for clarifying the details. Is the generated texture passed directly to the renderer, or does it undergo a transformation (as in the case of the generated displacements, which are added to the vertices)?

Hi, did you implement the texture reconstruction module for this project? Would you mind sharing it for learning purposes? Looking forward to your reply.

tejank10 commented 4 years ago

I have created a gist here regarding it. The module and function are to be placed in models.py
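
(The gist itself is external; purely as a sketch of the wiring it describes, here is one way a color head could be attached to an existing encoder/decoder reconstruction model. encoder, decoder, and color_head are placeholders, not the gist's actual code or the repo's API.)

```python
import torch.nn as nn

class TexturedReconstructor(nn.Module):
    """Hypothetical wiring: add a texture head next to an existing
    shape decoder so the model also returns per-face surface colors."""
    def __init__(self, encoder, decoder, color_head):
        super().__init__()
        self.encoder = encoder        # image -> latent feature
        self.decoder = decoder        # latent feature -> vertices, faces
        self.color_head = color_head  # latent feature -> per-face textures

    def forward(self, images):
        feat = self.encoder(images)
        vertices, faces = self.decoder(feat)
        textures = self.color_head(feat)  # e.g. (B, F, 1, 3) surface colors
        return vertices, faces, textures
```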

zyz-notebooks commented 4 years ago

Thank you very much for your reply and help

zyz-notebooks commented 4 years ago

> I have created a gist here regarding it. The module and function are to be placed in models.py

Hi, sorry to disturb you. Could you share the color loss code? Looking forward to your reply.

tejank10 commented 4 years ago

I have updated the gist with the Texture Laplacian Loss; I assume that's what you mean by the color loss code. Other than this, there is an L1 loss between the rendered and input RGB images, which is also a color loss.
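
(A minimal sketch of what a Laplacian smoothness loss on per-face colors over adjacent triangles could look like; the face adjacency list is assumed to be precomputed from the mesh topology, and the function name is hypothetical.)

```python
import torch

def texture_laplacian_loss(face_colors, face_adjacency):
    """face_colors:    (batch, num_faces, 3) RGB color per face
    face_adjacency: (num_pairs, 2) long tensor of indices of adjacent faces
                    (faces sharing an edge), precomputed from the mesh"""
    c0 = face_colors[:, face_adjacency[:, 0]]  # (batch, num_pairs, 3)
    c1 = face_colors[:, face_adjacency[:, 1]]
    # penalize color differences between neighboring faces
    return ((c0 - c1) ** 2).sum(-1).mean()
```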

zyz-notebooks commented 4 years ago

I get it, thank you very much

zyz-notebooks commented 4 years ago

> I have updated the gist with the Texture Laplacian Loss; I assume that's what you mean by the color loss code. Other than this, there is an L1 loss between the rendered and input RGB images, which is also a color loss.

Thanks again for your help. I have managed an initial texture reconstruction, but there are some problems with the result: the textures I get are almost all black and white tones. Here are my train.py and model.py; could you check whether something is wrong with my settings? Looking forward to your reply!

tejank10 commented 4 years ago

Maybe this is the culprit: the loss is taken w.r.t. images_a[-1], which is a single image. I have updated the gist with the L1 loss that I used. It is analogous to the multiview_iou_loss in losses.py of SoftRas.
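
(For reference, a sketch of a multiview L1 color loss in the same spirit; the signature is hypothetical and is not the exact code from the gist or from losses.py.)

```python
import torch

def multiview_l1_color_loss(rendered, target):
    """rendered, target: lists (one entry per view) of (batch, 4, H, W)
    RGBA renders / images; only the RGB channels are compared here."""
    loss = sum(torch.abs(r[:, :3] - t[:, :3]).mean() for r, t in zip(rendered, target))
    return loss / len(rendered)
```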

zyz-notebooks commented 4 years ago

> Maybe this is the culprit: the loss is taken w.r.t. images_a[-1], which is a single image. I have updated the gist with the L1 loss that I used. It is analogous to the multiview_iou_loss in losses.py of SoftRas.

Thank you very much. The texture reconstruction now works well. I believe the code you contributed for texture reconstruction will help more people.

xiehousen commented 4 years ago

[attached image: demo_0003000]

> Maybe this is the culprit: the loss is taken w.r.t. images_a[-1], which is a single image. I have updated the gist with the L1 loss that I used. It is analogous to the multiview_iou_loss in losses.py of SoftRas.

> Thank you very much. The texture reconstruction now works well. I believe the code you contributed for texture reconstruction will help more people.

Hello, sorry to disturb you. Could you share your model.py code? I ran into the same problem as you when I re-implemented the loss and put col_gen in model.py. Looking forward to your reply.

ShichenLiu commented 3 years ago

Provided samples of reconstructed color meshes in the repo!

zoezhou1999 commented 2 years ago

Hi @ShichenLiu, I followed the code snippets provided by @tejank10, and texture reconstruction seems to work. However, the current texture reconstruction code and the description in the paper's supplementary material produce textures of shape batch_size x face_num x 1 x 3, whereas SoftRas's obj saving requires texture_res >= 2, so I simply repeat the texel dimension (dim 2). I think this may affect the results. Would you mind providing some hints on how your texture reconstruction implementation handles this? Thank you!
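
(A tiny sketch of the repeat workaround mentioned above, assuming 'surface' textures of shape (batch, num_faces, texture_res**2, 3); whether this degrades quality is exactly the open question.)

```python
import torch

batch, num_faces = 1, 1280                     # dummy sizes
textures = torch.rand(batch, num_faces, 1, 3)  # color head output, texture_res = 1
texture_res = 2                                # minimum resolution accepted when saving .obj
# duplicate the single per-face color across texture_res**2 texels before saving
textures_for_saving = textures.repeat(1, 1, texture_res ** 2, 1)  # (1, 1280, 4, 3)
```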