Open Shrilliant opened 1 year ago
There's already a three.js example here
https://github.com/greggman/dekapng/blob/main/examples/three.js/renderer3d.js
You can see it's generally straightforward. You render your scene with camera settings that produce just a portion of the larger image, then pass that data to the library.
You can see it asking for chunks here
Also see #2
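In case it helps, here's a rough sketch of the "render a portion with camera settings" part, assuming a plain three.js `WebGLRenderer` and `camera.setViewOffset`. This isn't the actual example code, and the name `drawSegment` is mine:

```js
// Render just one rectangle of a much larger virtual image, then read the pixels
// back so they can be handed to the PNG library.
function drawSegment(renderer, scene, camera, fullWidth, fullHeight, x, y, segWidth, segHeight) {
  // Tell the camera it is looking at an (x, y, segWidth, segHeight) window
  // of a fullWidth x fullHeight image.
  camera.setViewOffset(fullWidth, fullHeight, x, y, segWidth, segHeight);

  renderer.setSize(segWidth, segHeight, false);
  renderer.render(scene, camera);

  // Read the rendered pixels out of the WebGL drawing buffer
  // (RGBA, rows come back bottom-up).
  const gl = renderer.getContext();
  const pixels = new Uint8Array(segWidth * segHeight * 4);
  gl.readPixels(0, 0, segWidth, segHeight, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

  camera.clearViewOffset();
  return pixels;
}
```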
Thank you for these resources!
I saw the THREE example, though I wasn't sure it would work with a renderer that uses raytracing to create images, since the scene gets segmented once the PNG generation process begins (?). Given how long raytracing takes to finish an image, I thought that might mean the scene has to be re-rendered for each new segment, which would require some kind of async process to wait for each segment to finish raytracing the required number of samples.
Am I mistaken to say that this would be true for your library given how it generates output images?
I'll look into THREE.Highres as well! Thank you again.
[edit] wait duh, I'll look into the example you sent. Thank you so much!
The example is already async. You could easily make it more async: just declare `drawArea` to be async and await it in the loop that's calling it. Then `drawArea` can render as many frames as you need to finish that area.
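A hedged sketch of that idea (not the example's actual code): `pathTracer`, `samplesPerSegment`, and the per-sample call are placeholders for whatever your three-gpu-pathtracer setup actually exposes.

```js
// drawArea is declared async so it can pump as many pathtracer samples as needed
// before this segment's pixels are read back.
async function drawArea(x, y, width, height) {
  // ...configure the camera / view offset for this segment as in the example...

  for (let i = 0; i < samplesPerSegment; ++i) {
    pathTracer.renderSample(); // accumulate one more sample (name may differ in your version)
    // Yield to the browser so the tab stays responsive during long renders.
    await new Promise(resolve => requestAnimationFrame(resolve));
  }

  // ...read back this segment's pixels and return them to the chunk loop...
}

// In the loop that walks the segments of the giant image:
// const pixels = await drawArea(x, y, segWidth, segHeight);
```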
Whether or not it will work for you, I have no idea. You can watch Blender render with raytracing or pathtracing and it clearly renders in segments, but I have no idea if three-gpu-pathtracer will handle this correctly.
If there is any post-processing, I'd expect some things to require fixing. For example, a vignette post-processing effect is probably not designed to run in segments. A scanline post-processing effect is also unlikely to be designed to run in segments.
Pathtracing renderers often have a screen-space blur pass. For that to work without leaving artifacts when segmenting, you might have to render slightly larger segments and then read out the middle. In other words, if the requested segment is 300x150, you might need to render 320x170 and pull out the 300x150 center (assuming the blur needs 10 pixels on each side to blur properly).
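For example, a minimal sketch of that cropping step, assuming the oversized segment comes back as a row-major RGBA `Uint8Array`; `cropCenter` and `renderSegment` are hypothetical names, not part of dekapng:

```js
// Copy the requested width x height center out of a padded segment so
// screen-space effects have valid neighbors at the seams.
function cropCenter(paddedPixels, paddedWidth, pad, width, height) {
  const out = new Uint8Array(width * height * 4);
  for (let row = 0; row < height; ++row) {
    const srcStart = ((row + pad) * paddedWidth + pad) * 4;
    out.set(paddedPixels.subarray(srcStart, srcStart + width * 4), row * width * 4);
  }
  return out;
}

// e.g. for a requested 300x150 segment with a 10 pixel blur radius:
// const padded = renderSegment(x - 10, y - 10, 320, 170); // hypothetical renderer call
// const pixels = cropCenter(padded, 320, 10, 300, 150);
```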
Hello!
I'm working with this renderer https://github.com/gkjohnson/three-gpu-pathtracer
for THREE to render images in the browser as raytraced images. Of course, you can imagine I wanted to export them as gigantic images and absolutely fry my computer and the computer of anyone else who wanted to export them, but I just discovered the drawing buffer limit in Chrome and found your fabulous library as a result.
Since your library uses chunks, I imagine each chunk would have to be re-rendered for the PNG to be properly stitched together? I was wondering if you could help with this issue or provide some guidance (or maybe even an example if you're feeling a little wild).
Please let me know, and thank you for creating this! It's solving some problems :)