I don't see a fundamental reason why this wouldn't work. If you can create a minimal example for me to debug, that will be much easier.
See the code at https://github.com/BachiLi/redner/issues/58 for an example of using network-generated vertices during optimization.
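For reference, a minimal sketch of that pattern, assuming the pyredner API of that era (`serialize_scene` + `RenderFunction.apply`); `encoder`, `image`, `faces`, `camera`, `materials`, `lights`, and `target_img` are placeholders for your own setup:

```python
import torch
import pyredner

# Placeholders: `encoder` is your network X, `image` its input, `faces` a fixed
# (F, 3) triangle index tensor. Tensors should live on pyredner.get_device().
vertices = encoder(image).view(-1, 3).contiguous()  # grad flows back via the encoder
indices = faces.int().contiguous()                  # redner expects int32 indices

shape = pyredner.Shape(vertices=vertices, indices=indices, material_id=0)
scene = pyredner.Scene(camera=camera, shapes=[shape],
                       materials=materials, area_lights=lights)

# RenderFunction is already a torch.autograd.Function, so the render call
# participates in autograd like any other differentiable op.
scene_args = pyredner.RenderFunction.serialize_scene(
    scene=scene, num_samples=16, max_bounces=1)
img = pyredner.RenderFunction.apply(0, *scene_args)  # first argument is the RNG seed

loss = (img - target_img).pow(2).mean()
loss.backward()  # vertex gradients propagate back into the encoder's parameters
```

Because the rendered image stays inside PyTorch's autograd graph, learning textures and lighting at the same time is just a matter of putting those tensors (with `requires_grad=True`) into the same optimizer as the encoder's parameters.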
I believe that will solve my problem. Thanks for your attention. Closing this issue.
Thanks for this wonderful work! I ran into trouble when I tried to use redner as a differentiable rendering module inside my network. Given an image I, I train a network X to predict the vertices corresponding to it. Network X includes an encoder that learns features of I from which the vertices are constructed, so I can get vertices = X(I).
Next, I want to use redner to learn textures and lighting by rendering those vertices to the 2D image plane. My original assumption was that redner computes gradients with respect to the vertices, textures, and lighting from the loss between images, and that the vertex gradients can be backpropagated into my encoder X. However, I got an error like this:
I originally thought I could use redner as just another part of my network, but it turned out not to be that simple. I suspect the problem lies in the gradient backpropagation between redner and X. I thought I would need to write a torch.autograd.Function wrapper to obtain the vertex gradients from RenderFunction.backward() in render_pytorch.py and pass them back to my network X, but I ran into difficulties: I really don't know how to obtain the gradients computed by redner. Could you tell me how to get them? Thanks!
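For anyone landing here later: no custom wrapper should be needed, since `RenderFunction` in `render_pytorch.py` is already a `torch.autograd.Function`, so `loss.backward()` invokes `RenderFunction.backward()` automatically. A minimal sketch for inspecting the vertex gradients directly, reusing the placeholder names from the example above:

```python
# Detaching makes `vertices` a leaf tensor so that .grad is populated after
# backward(); in the full pipeline you would skip the detach and let the
# gradient continue into the encoder instead.
vertices = encoder(image).view(-1, 3).detach().requires_grad_(True)

shape = pyredner.Shape(vertices=vertices, indices=indices, material_id=0)
scene = pyredner.Scene(camera=camera, shapes=[shape],
                       materials=materials, area_lights=lights)
scene_args = pyredner.RenderFunction.serialize_scene(
    scene=scene, num_samples=16, max_bounces=1)

img = pyredner.RenderFunction.apply(0, *scene_args)
loss = (img - target_img).pow(2).mean()
loss.backward()       # autograd invokes RenderFunction.backward() here
print(vertices.grad)  # d(loss)/d(vertices), as computed by redner
```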