mmp / pbrt-v3

Source code for pbrt, the renderer described in the third edition of "Physically Based Rendering: From Theory To Implementation", by Matt Pharr, Wenzel Jakob, and Greg Humphreys.
http://pbrt.org
BSD 2-Clause "Simplified" License

Massive memory usage in SPPM example #264

Closed: nkaragas closed this issue 4 years ago

nkaragas commented 4 years ago

When I tried rendering the sample scene pbrt-v3-scenes/caustic-glass/f16-9c.pbrt (10,000 iterations), my system ran out of memory and crashed (on both Windows 8.1 and Ubuntu). I started tracking memory usage and adding more swap space, but it was never enough, even with over 100 GB of memory + swap. The other caustic-glass scenes (f16-9a.pbrt with 10 iterations and f16-9b.pbrt with 100 iterations) complete, although the 100-iteration case takes ~24.7 GB.

I've looked through the code and found that the large memory usage comes down to two factors:

1) The per-thread arena is passed to isect.ComputeScatteringFunctions() and UniformSampleOneLight() for allocations, but this memory is never freed between iterations, so it keeps building up. One simple fix I tried is to create a separate per-thread arena for those two functions (leaving the original alone for pixel node allocations that, I assume, have to persist across iterations) and to Reset() it between iterations; see the sketch after this list. This helped greatly: the 100-iteration case went from ~24.7 GB to ~4.4 GB.

2) However, even with the above fix, the 10,000-iteration case still consumes more memory than I can throw at it. The remaining allocations appear to be pixel nodes. I'm not yet familiar enough with the algorithm to tell whether there's another way to do this without persistent pixel node allocations.
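Here is a rough sketch of the workaround from (1). It reuses pbrt-v3 names (MemoryArena, MaxThreadIndex(), ThreadIndex, ParallelFor2D, ComputeScatteringFunctions(), UniformSampleOneLight()), but it is only an illustration of the idea, not the actual patch; surrounding variables such as isect, ray, tileSampler, nTiles, and nIterations stand in for the ones in the SPPM render loop.

```cpp
// One arena per thread for allocations that must survive across SPPM
// iterations (the visible-point / pixel node data), plus a second scratch
// arena per thread for per-iteration BSDF and direct-lighting allocations
// that can be discarded once the iteration finishes.
std::vector<MemoryArena> perThreadArenas(MaxThreadIndex());         // persistent
std::vector<MemoryArena> perThreadScratchArenas(MaxThreadIndex());  // per-iteration

for (int iter = 0; iter < nIterations; ++iter) {
    // Camera pass (details elided): trace paths and create visible points.
    ParallelFor2D([&](Point2i tile) {
        MemoryArena &arena = perThreadArenas[ThreadIndex];
        MemoryArena &scratch = perThreadScratchArenas[ThreadIndex];
        // ...
        // Temporary allocations go into the scratch arena rather than the
        // long-lived one:
        isect.ComputeScatteringFunctions(ray, scratch, true);
        Spectrum Ld = UniformSampleOneLight(isect, scene, scratch, *tileSampler);
        // Anything that must outlive this iteration (e.g. the pixel's
        // visible point) still uses the persistent arena.
        // ...
    }, nTiles);

    // ... photon pass, grid construction, etc. ...

    // Free all per-iteration scratch memory before the next iteration.
    for (MemoryArena &scratch : perThreadScratchArenas)
        scratch.Reset();
}
```

The key point is simply that the two lifetimes are separated: per-iteration scratch memory is released every iteration, while only the data that genuinely has to persist accumulates across the render.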

Am I doing something wrong, or is 10,000 iterations of SPPM simply too much for an average machine? Either way, adding the second per-thread arena should help reduce SPPM's memory usage.

mmp commented 4 years ago

Ack! That was introduced a week or two ago in a fix to improve parallel scalability in the SPPMIntegrator. Fixed now in top-of-tree.

Thanks for reporting that!