appleseedhq / appleseed

A modern open source rendering engine for animation and visual effects
https://appleseedhq.net/
MIT License

Implement and benchmark Stochastic Light Culling #990

Status: Open. Narann opened this issue 8 years ago.

Narann commented 8 years ago

Direct copy of Nathan's mail:

Hey guys, JCGT just published a new paper about dealing with large numbers of light sources efficiently: http://jcgt.org/published/0005/01/02/

The variation of the technique for path tracing looks nearly identical to a method I came up with a couple of years ago (http://ompf2.com/viewtopic.php?f=3&t=1938), which is really validating. It's nice that it's in a published paper now. Really exciting! :-)

I don't know what the cost of this would be when the number of lights in the scene is low.

cessen commented 8 years ago

As I mentioned on the mailing list, upon reading the paper more thoroughly it appears the technique in the paper isn't actually quite the same as the one I came up with a couple of years ago. The technique I came up with is, I think, better suited to Appleseed because it doesn't require multiple passes to converge to a consistent result.

> I don't know what the cost of this would be when the number of lights in the scene is low.

Based on my implementation in Psychopath: the performance cost is negligible, and even with as few as two light sources it reduces noise noticeably. I expect the same would be true of an implementation in Appleseed.
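A minimal sketch (hypothetical toy code, not Psychopath's or appleseed's actual implementation) of why importance-sampled light selection helps even with only two lights: if each light is chosen with probability proportional to its estimated contribution, and the chosen light's contribution is divided by that probability, the estimator stays unbiased while its variance drops compared to uniform selection. The contribution values and probabilities below are made up for illustration.

```python
import random

def estimate(contributions, probs, n=100_000, seed=1):
    """One-sample-per-shading-point light selection: pick light i with
    probability probs[i] and return contributions[i] / probs[i].
    This is an unbiased estimator of sum(contributions)."""
    rng = random.Random(seed)
    lights = list(range(len(contributions)))
    total = 0.0
    for _ in range(n):
        i = rng.choices(lights, weights=probs)[0]
        total += contributions[i] / probs[i]
    return total / n

# Two lights, one 99x brighter than the other (made-up values).
contrib = [99.0, 1.0]
uniform = [0.5, 0.5]
importance = [0.99, 0.01]  # proportional to estimated contribution

# Both estimators converge to the true total (100.0), but the
# importance-weighted one does so with far less variance.
print(estimate(contrib, uniform))
print(estimate(contrib, importance))
```

When the selection probabilities exactly match the contributions, every sample evaluates to the same value and the per-sample variance is essentially zero; uniform selection bounces between 198 and 2 and only averages out slowly.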

Narann commented 8 years ago

I'm very impressed by your technique and really hope it appears in appleseed or, even better, that you write a paper on it.

From your perspective, in which situations could your approach be less efficient? I'm always interested in the "worst case scenario".

cessen commented 8 years ago

The cases that Psychopath currently handles poorly are:

However, I'm pretty sure both of these cases are addressable; they just need further R&D.

There are also some pathological cases involving things being in shadow. The light tree traversal is used to choose which light to sample, so it happens before shadowing is computed and therefore cannot account for shadowing in the importance sampling. For example, if you have a closed box with a super bright light source inside, that bright light will "steal" samples from other light sources, making nearby surfaces outside the box noisier.

But it's worth noting that even in that case, the overall noise of the scene is still being reduced and "evened out": inside the box the variance is reduced by more than it is increased outside the box. In other words, if the box were open so the camera could see inside it, the technique would still reduce the total number of samples needed to converge to a noise-free image. So in some sense it's still doing "the right thing", but because the camera can't see inside the box, it ends up worse for that particular viewpoint.
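The closed-box case above can be illustrated numerically (a hypothetical sketch with made-up values, not renderer code): the estimator stays unbiased because each sampled contribution is divided by its selection probability, but when the traversal weights come from *unoccluded* estimates, a fully shadowed bright light soaks up most of the samples and variance at points outside the box becomes worse than with uniform selection.

```python
import random
import statistics

def one_sample_estimates(actual, probs, n=200_000, seed=7):
    """Per-sample values of the one-light estimator actual[i] / probs[i],
    where light i is chosen with probability probs[i]."""
    rng = random.Random(seed)
    lights = list(range(len(actual)))
    return [actual[j] / probs[j]
            for j in (rng.choices(lights, weights=probs)[0] for _ in range(n))]

# A bright light sealed inside a box: its unoccluded estimate is large,
# but its actual contribution at a point outside the box is zero.
actual = [0.0, 1.0]        # occluded bright light, visible dim light
tree_probs = [0.99, 0.01]  # traversal weights from unoccluded estimates
uniform = [0.5, 0.5]

for probs in (tree_probs, uniform):
    samples = one_sample_estimates(actual, probs)
    # Both means converge to the true value (1.0), but the tree-weighted
    # variance is far higher at this shadowed-from-the-bright-light point.
    print(statistics.mean(samples), statistics.variance(samples))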

There may well be other cases that I just haven't thought of or run into. That's part of why implementing it in Appleseed will be useful: we can get more people playing with it and finding its weak points!

In general, as with any importance sampling technique, the more the sampling distribution differs from the function you're actually trying to sample, the worse the result; the closer it matches, the better. So any time the tree and tree traversal do worse than a completely random choice, you'll see a worse result than plain random sampling. It's where the approximation breaks down badly (so badly that it's worse than random) that we'll see problems.
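For the discrete light-selection case, this "worse than random" threshold can be checked in closed form rather than by sampling (a hypothetical sketch with made-up contribution values): the variance of the one-light estimator is `sum_i p_i * (c_i / p_i)^2 - (sum_i c_i)^2`, which is zero when the selection probabilities exactly match the contributions and can exceed the uniform-selection variance when they are badly mismatched.

```python
def exact_variance(contrib, probs):
    """Closed-form variance of the one-light estimator contrib[i]/probs[i]:
    Var = sum_i probs[i] * (contrib[i] / probs[i])**2 - (sum(contrib))**2."""
    total = sum(contrib)
    return sum(p * (c / p) ** 2 for c, p in zip(contrib, probs)) - total ** 2

contrib = [9.0, 1.0]
print(exact_variance(contrib, [0.9, 0.1]))    # perfect match: zero variance
print(exact_variance(contrib, [0.5, 0.5]))    # uniform random choice
print(exact_variance(contrib, [0.05, 0.95]))  # badly mismatched: worse than uniform
```

The third case is exactly the failure mode described above: a selection distribution so far from the true contributions that a completely random choice would have been better.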

dictoon commented 8 years ago

Thanks for the detailed explanation. I'm looking forward to playing with this in appleseed.