BachiLi / redner

Differentiable rendering without approximation.
https://people.csail.mit.edu/tzumao/diffrt/
MIT License
1.39k stars 139 forks

Question about supported light type #27

Closed wylighting closed 5 years ago

wylighting commented 5 years ago

Hi, thanks for your awesome work! I have some questions about the light types in this renderer. I noticed that the renderer does not support a pure point light, which is referenced in Section 5.2 of the paper; there is only an areaLight type in this implementation. I don't really understand why this renderer has a limit on light types. Are other light types (e.g. spot light, directional light) not supported by this method? Could you give a brief explanation of the problem or difficulty of using a point light?

Thanks a lot!

BachiLi commented 5 years ago

Point light sources introduce extra Dirac deltas into the path integral, and the math in the paper is not compatible with them, since a product of Dirac deltas is not a well-defined operation. For a similar reason we don't support pure specular BRDFs either.
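To make this concrete, here is a rough sketch in illustrative notation (these are not the paper's exact symbols):

```latex
% Pixel value as a path integral over path space \Omega:
I = \int_{\Omega} f(\bar{x}) \, d\mu(\bar{x})
% A point light at position p makes the emission a Dirac delta,
% so the integrand f already carries a delta:
L_e(x) = \frac{\Phi}{4\pi} \, \delta(x - p)
% Edge sampling writes the derivative as an interior term plus a
% boundary term supported on silhouette edges, i.e. another delta:
\frac{\partial I}{\partial \theta}
  = \int_{\Omega} \frac{\partial f}{\partial \theta} \, d\mu(\bar{x})
  + \int_{\partial \Omega} \left( f(\bar{x}^{+}) - f(\bar{x}^{-}) \right) d\sigma(\bar{x})
% With a point light, the emission delta and the boundary delta end
% up multiplied, and a product of Dirac deltas is not well-defined.
```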

In practice, the problem is that in a unidirectional path-tracing framework, the sharp shadow cast by a point light source has zero probability of being sampled (the chance that you hit the shadow boundary exactly through path tracing is zero).

A possible solution is to use light tracing or bidirectional path tracing: you can trace a path from the point light source to a triangle edge to create a light path. However, handling multiple importance sampling and the change of measure correctly can be tricky; this is certainly an interesting direction for future research. Another possibility is to relax the Dirac delta, as people did with photon mapping/photon beams.

Another possible trick is to use a shadow map: you can render with point lights using the deferred-shading trick ( https://github.com/BachiLi/redner/wiki/Tutorial-4%3A-fast-deferred-rendering ), and also render a depth map from the light source as a shadow map. With the typical depth test, shadow mapping is still not differentiable, because hit-or-not-hit is a binary step function. However, you can replace the hard depth test with a soft visibility function (say, a sigmoid). I haven't tried this, so I would love to hear from anyone who implements it.
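A minimal sketch of that soft depth test (all names here are made up for illustration; in a real implementation the depth values would come from the renderer's depth buffers):

```python
import math

def soft_visibility(depth_from_light, shadow_map_depth, sharpness=50.0):
    """Differentiable stand-in for the binary shadow-map depth test.

    The hard test is: visible iff depth_from_light <= shadow_map_depth.
    Replacing the step function with a sigmoid makes visibility smooth
    in the depths; as sharpness grows, it approaches the hard test.
    """
    return 1.0 / (1.0 + math.exp(-sharpness * (shadow_map_depth - depth_from_light)))

# A point clearly in front of the occluder stored in the shadow map is
# almost fully lit; a point clearly behind it is almost fully shadowed.
lit = soft_visibility(depth_from_light=1.0, shadow_map_depth=1.2)       # ~1.0
shadowed = soft_visibility(depth_from_light=1.5, shadow_map_depth=1.2)  # ~0.0
```

In practice one would apply this per pixel to the depths from the deferred-shading pass, and could anneal `sharpness` upward during optimization to tighten the soft shadow toward the hard one.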

Sorry for all the technical jargon. Let me know if anything is unclear.

wylighting commented 5 years ago

Sorry for coming back late, but thanks a lot for your reply and the careful explanation of my question! It really helps!
I have another question: if we fix the camera, geometry, and lighting, and only the material is to be optimized, there is probably no discontinuous-visibility problem, because the rendered image varies smoothly as the materials are optimized. So can we use the original area sampling without the edge-sampling technique in this scenario? By the way, is there a convenient way to turn edge sampling on/off in the code for testing? Thanks!

BachiLi commented 5 years ago

You are correct. We have a recent paper on this: https://niessnerlab.org/projects/azinovic2019inverse.html (also check out work by Gkioulekas, e.g. http://www.cs.cornell.edu/projects/translucency/#acquisition-sa13, and by some Cornell people, e.g. http://www.cs.cornell.edu/projects/ctcloth/#matching-cloth ). It also depends on your material model: if your procedural shader has discontinuities, then you still need some kind of edge sampling or prefiltering. The index of refraction of dielectric materials can also cause discontinuities.

To turn off edge sampling, I would add "return;" at these two lines: https://github.com/BachiLi/redner/blob/master/edge.cpp#L235 https://github.com/BachiLi/redner/blob/master/edge.cpp#L644 I might add a flag for this in the future.

wylighting commented 5 years ago

Wow! Cool!!!

BachiLi commented 5 years ago

Closing this issue. Feel free to reopen if there are more questions.