chunky-dev / chunky

A path tracer to create realistic images of your Minecraft worlds.
https://chunky-dev.github.io/docs
GNU General Public License v3.0
651 stars 77 forks

[Suggestion/Discussion] Selectively sample pixels / variable SPP #474

Closed: quantumgolem closed this issue 4 years ago

quantumgolem commented 6 years ago

This question was originally posted over here. I am simply posting it here again in hopes that the developer can weigh in on it.


(I apologize in advance for the lack of clear terminology in this question. I am not fully familiar with Chunky's terminology.)

I'm quite new to rendering worlds with Chunky. I've been trying to render a 4K image, which is clearly quite time-consuming.

As far as I understand, Chunky renders an image by taking a certain number of samples per pixel (SPP). One way to reduce the render time would simply be to decrease the number of pixels being rendered (i.e. rendering a 1080p image instead). However, my question is geared more towards using a variable SPP to render images more quickly.

Multiple samples are taken for each pixel in order to reduce noise, particularly in darkly lit areas of the image. However, the same number of samples is taken of brightly lit areas as of darkly lit ones. Brightly lit areas, as far as I understand, do not need as many samples, since they become noise-free relatively quickly (after around 25 samples in my case). This means the default 1000 samples which Chunky performs would indeed make the dark areas look better, but for most of the image (which is brightly lit) this just wastes time, making those areas look hardly any better.
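The effect described above can be illustrated with a small sketch (plain Python, nothing Chunky-specific; the sample values are made up). The residual noise in a pixel's average shrinks roughly as 1/sqrt(n), but the noise *relative to the pixel's brightness* is what the eye sees, and it is far higher for a dark pixel lit by rare bright paths:

```python
import statistics

def pixel_noise(samples):
    """Mean of the samples plus the standard error of that mean,
    i.e. the residual noise, which shrinks roughly as 1/sqrt(n)."""
    mean = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / len(samples) ** 0.5
    return mean, stderr

# A bright pixel: every path returns roughly the same radiance.
bright = [0.9, 1.0, 0.95, 0.9, 1.0]
# A dark pixel lit by rare bright paths: most samples miss the light.
dark = [0.0, 0.0, 0.0, 0.0, 1.0]

b_mean, b_err = pixel_noise(bright)
d_mean, d_err = pixel_noise(dark)
# The relative noise (stderr / mean) is far higher for the dark pixel,
# so it needs many more samples to look equally clean.
```

This is why a flat SPP target spends most of its budget polishing pixels that already look fine.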

I've been wondering whether it is possible for Chunky to "intelligently sample" the image. This would mean sampling the pixels in the bright areas only about 25 times or so, and then continuing to sample only the roughly 10% of the image which is darkly lit, in order to render the image much faster.

Is this possible with Chunky in any way?

llbit commented 6 years ago

This has been asked before, see for example issue #150.

Here are some relevant research articles on dynamic noise detection/filtering:

quantumgolem commented 6 years ago

Thank you very much for the links and the information.

So I've been thinking about this, and as far as I can tell, the only possibility right now would be to manually zoom in on darker areas, render them selectively, and then stitch the images together afterwards. Does that sound like a good idea? I will be trying it out later!

llbit commented 6 years ago

That sounds like it could work, and it's an interesting idea. One issue I can imagine is blending the separate images smoothly so that you don't get a distinct border between the zoomed-in region and the rest of the image.
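One common way to avoid that border is to cross-fade (feather) the zoomed-in render into the base image over a few pixels instead of pasting it hard. A minimal 1-D sketch (hypothetical helper, operating on a single row of grayscale values; a real tool would do this in 2-D and per color channel):

```python
def feather_blend(base, patch, start, feather):
    """Paste `patch` (a row of pixel values) into `base` at index `start`,
    ramping the patch's weight up over the first `feather` pixels and back
    down over the last `feather`, so there is no hard seam at the border."""
    out = list(base)
    n = len(patch)
    for i, p in enumerate(patch):
        edge = min(i, n - 1 - i)              # distance to the nearer patch edge
        w = min(1.0, (edge + 1) / (feather + 1))
        out[start + i] = (1.0 - w) * base[start + i] + w * p
    return out
```

Pixels deep inside the patch are taken entirely from the zoomed-in render, while pixels near its edges mix smoothly with the surrounding image.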

quantumgolem commented 6 years ago

I'd assume some panorama stitching programs could do that quite well!