Open oijoijcoiejoijce opened 1 year ago
Hi, thanks for your interest in this work! Our method is a general strategy for accelerating inverse rendering, although the current implementation is limited to materials in physically-based pipelines.
If you simply want to, for example, use it to recover texture maps for a static face for view synthesis, this Mitsuba implementation should work (although you'll need multiple views, which I have not released in this public version yet).
If you instead want to use the method from the paper you mentioned, our algorithm should also work on neural textures, although it may require some additional work. The key idea is that our method accelerates inverse rendering by reusing derivatives of the loss with respect to texels; in that paper, this would be the derivative of the loss with respect to the neural-feature texels.
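To make the "reuse derivatives with respect to texels" idea concrete, here is a minimal NumPy sketch, not the released code. It assumes a toy setup where texels come from a small linear decoder (standing in for a neural texture) and where computing `dL/dtexels` is the expensive step (in real inverse rendering it requires a differentiable render); that cached texel gradient is then reused for several cheap parameter updates via the chain rule. All names (`texels_from_params`, `loss_and_texel_grad`) are hypothetical.

```python
import numpy as np

# Toy "neural texture": texels are a differentiable function of latent
# parameters. Here the decoder is a fixed linear map W.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))          # toy decoder weights (fixed)
params = rng.normal(size=(8,))        # optimizable latent parameters
target = rng.normal(size=(16,))       # stand-in for observed data

def texels_from_params(p):
    return W @ p                      # texels = W p

def loss_and_texel_grad(texels):
    # In real inverse rendering this is the expensive step:
    # differentiable render + backprop down to the texels.
    # Here it is a simple L2 loss against the target.
    r = texels - target
    return 0.5 * r @ r, r             # loss, dL/dtexels

initial_loss, _ = loss_and_texel_grad(texels_from_params(params))

lr = 0.01
for step in range(50):
    texels = texels_from_params(params)
    _, g_texels = loss_and_texel_grad(texels)      # expensive: computed once
    for _ in range(3):                             # cheap inner updates that
        g_params = W.T @ g_texels                  # reuse the cached texel
        params -= lr * g_params                    # gradient via chain rule

final_loss, _ = loss_and_texel_grad(texels_from_params(params))
```

The inner loop deliberately keeps `g_texels` stale: the point of the acceleration is that the parameter-side chain rule (`W.T @ g_texels`) is far cheaper than re-rendering, so several parameter steps can amortize one texel-gradient evaluation.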
Super cool work! An earlier paper (Deferred Neural Rendering: Image Synthesis Using Neural Textures) did this for talking heads; any insights on how to use your code/methodology to render photorealistic portraits?