gkjohnson / three-gpu-pathtracer

Path tracing renderer and utilities for three.js built on top of three-mesh-bvh.
https://gkjohnson.github.io/three-gpu-pathtracer/example/bundle/index.html

Investigate Looking Glass Display Rendering #284

Closed - gkjohnson closed this issue 1 year ago

gkjohnson commented 1 year ago

Plan

Questions

BryanChrisBrown commented 1 year ago

Some more context.

The Looking Glass display works by rendering the scene from many different perspectives simultaneously, which would be pretty inefficient for path tracing techniques since it multiplies the number of samples required by the number of views.

However, there are some techniques, like those listed above, that allow you to ray trace directly to the display itself without the intermediate step of rendering 45-100 different perspectives.

The Looking Glass WebXR Library provides a method of rendering to the display via the WebXR API; however, this API traditionally isn't meant for path tracing or ray tracing.

It'd be interesting to explore what functionality we could expose on the WebXR side of things to allow it to be leveraged without adding custom device code to the path tracer itself.

gkjohnson commented 1 year ago

@BryanChrisBrown looking ahead to this a little bit - what method do you recommend for rendering here? From a quick look, the approach used in the ShaderToy seems most straightforward, requiring only that the interleaved ray be computed. Are there any drawbacks to using this interleaved method? With the pathtracer we need to accumulate samples over multiple frames into a single buffer, so this approach seems like a good fit.

Also, are there any docs on the logic for computing the camera ray used in the interleaved approach from the ShaderToy examples? It's easy to get lost without more context on how the rendering tech works for the device.

BryanChrisBrown commented 1 year ago

Hey Garrett!

I'm looking into how much internal documentation we can share regarding your query there.

While I'm waiting for that, though, I'm curious whether it may be easier to use the Looking Glass WebXR library, which provides a quilt-based approach to generating holograms for the Looking Glass and handles most of the device-specific logic for you.

gkjohnson commented 1 year ago

> While I'm waiting for that, though, I'm curious whether it may be easier to use the Looking Glass WebXR library, which provides a quilt-based approach to generating holograms for the Looking Glass and handles most of the device-specific logic for you.

Thanks! The quilt method would work easily, but my impression was that it was more resolution-limited, right? At least I recall reading that somewhere.

Do the ShaderToy examples use the WebXR API? For WebXR, my current roadblock is understanding how we can render cumulative path tracing to it, since the pathtracer requires rendering to an intermediate float render buffer first. Is this something that's possible with WebXR?

gkjohnson commented 1 year ago

Some progress on a quilt renderer:

[image: early quilt render]

BryanChrisBrown commented 1 year ago

Nice work so far! To answer your question about ShaderToy: they currently use WebVR; I don't believe they've updated to use WebXR directly yet.

For the Looking Glass WebXR Library, I suppose it should be possible to render directly to the float-based framebuffer first, then copy that framebuffer over to the canvas on the device. I'm not quite sure how that'd work since ray tracing isn't my main area of work, but I'm happy to help out when it comes to it!

gkjohnson commented 1 year ago

> I'm not quite sure how that'd work since ray tracing isn't my main area of work, but I'm happy to help out when it comes to it!

I think the only important thing to note here is that, with the way the pathtracer works, the samples must be rendered and blended into an intermediate floating point render target. All the ray transformations and geometry intersections happen in the shader, so the only actual geometry rendered is a full screen quad used to write to that intermediate buffer.
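In rough code, that accumulation scheme looks something like this (an illustrative sketch, not the pathtracer's actual internals - the constant-color fragment shader stands in for the real ray tracing shader):

```js
import * as THREE from 'three';
import { FullScreenQuad } from 'three/examples/jsm/postprocessing/Pass.js';

// Float target so accumulated radiance isn't clamped to [ 0, 1 ].
const accumTarget = new THREE.WebGLRenderTarget( 1024, 1024, { type: THREE.FloatType } );

// Stand-in for the path tracing shader - a real implementation traces rays here.
const sampleMaterial = new THREE.ShaderMaterial( {
  uniforms: { weight: { value: 1.0 } },
  vertexShader: /* glsl */`
    void main() {
      gl_Position = vec4( position.xy, 0.0, 1.0 );
    }
  `,
  fragmentShader: /* glsl */`
    uniform float weight;
    void main() {
      vec3 sampleColor = vec3( 0.5 ); // the per-ray result would be computed here
      // Normal blending with alpha = weight lerps the buffer toward the new sample.
      gl_FragColor = vec4( sampleColor, weight );
    }
  `,
  transparent: true,
  blending: THREE.NormalBlending,
  depthTest: false,
  depthWrite: false,
} );

const quad = new FullScreenQuad( sampleMaterial );

function accumulateSample( renderer, sampleIndex ) {
  // Weight 1 / ( n + 1 ) turns the blend into a running average of all samples.
  sampleMaterial.uniforms.weight.value = 1 / ( sampleIndex + 1 );
  renderer.autoClear = false; // keep previously accumulated samples in the target
  renderer.setRenderTarget( accumTarget );
  quad.render( renderer );
  renderer.setRenderTarget( null );
}
```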

> I suppose it should be possible to render directly to the float-based framebuffer first, then copy that framebuffer over to the canvas on the device.

I'm less familiar with the details of the WebXR API and how it works under the hood, so this is the part I'm struggling with right now. From looking at the three.js source, it looks like it just renders every XR camera to a framebuffer with a viewport provided by the API? In that case maybe it is as easy as copying the quilt? It's just not clear how to do that in three.js.

#376 is the PR with the quilt renderer and new demo. Here's a scaled screenshot of the quilt render as well, with multiple cameras and (I believe) correct off-axis frustums:

[image: scaled quilt render with multiple cameras]

BryanChrisBrown commented 1 year ago

> I think the only important thing to note here is that, with the way the pathtracer works, the samples must be rendered and blended into an intermediate floating point render target. All the ray transformations and geometry intersections happen in the shader, so the only actual geometry rendered is a full screen quad used to write to that intermediate buffer.

> I suppose it should be possible to render directly to the float-based framebuffer first, then copy that framebuffer over to the canvas on the device.

So that's pretty much how our WebXR implementation works. In WebXR, a WebGL scene renders to the baseLayer (an XRWebGLLayer) defined on the session's XRRenderState (https://developer.mozilla.org/en-US/docs/Web/API/XRRenderState), which is part of the XRSession that's created when a WebXR scene is initialized.

In our Looking Glass WebXR Library we draw to that baseLayer object, then copy it over to the display with the shader modifications needed to show the quilt accurately.
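For readers less familiar with WebXR, the generic flow being described is roughly this (standard WebXR API calls, not Looking Glass specific code; assumes it runs inside an async function):

```js
const canvas = document.createElement( 'canvas' );
const gl = canvas.getContext( 'webgl2', { xrCompatible: true } );

const session = await navigator.xr.requestSession( 'immersive-vr' );
await session.updateRenderState( { baseLayer: new XRWebGLLayer( session, gl ) } );
const refSpace = await session.requestReferenceSpace( 'local' );

session.requestAnimationFrame( function onFrame( time, frame ) {

  // All views render into the single framebuffer owned by the baseLayer.
  const layer = session.renderState.baseLayer;
  gl.bindFramebuffer( gl.FRAMEBUFFER, layer.framebuffer );

  // On a Looking Glass each view corresponds to a quilt tile, and each view
  // gets its own viewport within the shared framebuffer.
  for ( const view of frame.getViewerPose( refSpace ).views ) {
    const vp = layer.getViewport( view );
    gl.viewport( vp.x, vp.y, vp.width, vp.height );
    // ...draw the scene with view.projectionMatrix and view.transform...
  }

  session.requestAnimationFrame( onFrame );

} );
```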

On our end, this process is initialized here: https://github.com/Looking-Glass/looking-glass-webxr/blob/c070e8f54e7d750fecd1ecbbcfea8a52bd6aa72f/src/LookingGlassXRDevice.ts#L88-L144

and calls functions here: https://github.com/Looking-Glass/looking-glass-webxr/blob/main/src/LookingGlassXRWebGLLayer.ts

This currently works in Three, Babylon, and PlayCanvas, so it's fairly engine-agnostic, which is part of the joy of WebXR - it's a pretty effective abstraction for custom rendering pipelines!

I'm curious whether there's anything we'd need to expose from the library for you to be able to take advantage of this. Given the need to update the framebuffer as a whole, we could provide access to it for you to update from within the GPU path tracer. Hopefully this would reduce the amount of Looking Glass specific code you'd need on your end!

I'll take a look at the PR this coming week! Very excited to try it out!

> #376 is the PR with the quilt renderer and new demo. Here's a scaled screenshot of the quilt render as well, with multiple cameras and (I believe) correct off-axis frustums:

> [image: scaled quilt render]

At first glance your off-axis calculations look correct to me! Looks good in a Looking Glass! It might need to be focused a bit more on the character to give it a bit more sharpness in the display, but this is looking really good! I'm still amazed at how well the path tracer handles refraction and transparency.

I've uploaded the quilt to blocks.glass so folks following the thread can view it on their Looking Glass!

gkjohnson commented 1 year ago

> It might need to be focused a bit more on the character to give it a bit more sharpness in the display ... I've uploaded the quilt to blocks.glass so folks following the thread can view it on their Looking Glass!

Heh, well, that image was downscaled by almost 1/3 so GitHub would let me upload it 😅 Here's a zip of the full scale image, which should look better (though I haven't taken a look at it on the device yet). Once it can be adjusted in real time it should be easier to hone in on a good model placement:

lego-quilt.zip

> In our Looking Glass WebXR Library we draw to that baseLayer object, then copy it over to the display with the shader modifications needed to show the quilt accurately.

Got it - so I've gotten to a point where I can render the quilt to the canvas while XR is enabled, but it's still not displaying correctly. It turns out I had to set renderer.xr.enabled = false while copying the quilt to the canvas to keep three.js from hijacking the render and replacing the camera with the array of views, which doesn't work for a full screen quad.
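In code, the workaround amounts to something like this (quiltQuad here is a hypothetical FullScreenQuad that draws the accumulated quilt texture):

```js
// Temporarily disable three.js' XR camera handling so the blit renders as a
// single full screen pass instead of once per XR view.
renderer.xr.enabled = false;
renderer.setRenderTarget( null );
quiltQuad.render( renderer ); // copy the accumulated quilt to the canvas
renderer.xr.enabled = true;
```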

The quilt I'm rendering is set to 3360x3360 resolution with 8x6 tiles, which means each tile is 420x560. The quilt is then written directly to the canvas, after which the LKG plugin swizzles the render. Here's what I'm seeing on the LKG and in the popup window:
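For concreteness, the tile math above works out as follows (an illustrative helper, not library code; view ordering assumed to run left to right, bottom to top):

```js
const quiltWidth = 3360, quiltHeight = 3360;
const columns = 8, rows = 6;
const tileWidth = quiltWidth / columns; // 420
const tileHeight = quiltHeight / rows;  // 560

// Viewport of tile i within the quilt.
function tileViewport( i ) {
  const x = ( i % columns ) * tileWidth;
  const y = Math.floor( i / columns ) * tileHeight;
  return { x, y, width: tileWidth, height: tileHeight };
}
```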

> I'm curious whether there's anything we'd need to expose from the library for you to be able to take advantage of this. Given the need to update the framebuffer as a whole, we could provide access to it for you to update from within the GPU path tracer. Hopefully this would reduce the amount of Looking Glass specific code you'd need on your end!

Thanks! I'll make a write-up of some of the things that could be improved, but otherwise the camera logic isn't too complicated.

BryanChrisBrown commented 1 year ago

> Heh, well, that image was downscaled by almost 1/3 so GitHub would let me upload it 😅 Here's a zip of the full scale image, which should look better (though I haven't taken a look at it on the device yet). Once it can be adjusted in real time it should be easier to hone in on a good model placement:

> lego-quilt.zip

Looks much better at full res! Just updated the block!

> Got it - so I've gotten to a point where I can render the quilt to the canvas while XR is enabled, but it's still not displaying correctly. It turns out I had to set renderer.xr.enabled = false while copying the quilt to the canvas to keep three.js from hijacking the render and replacing the camera with the array of views, which doesn't work for a full screen quad.

This is a really interesting, clever hack! I'm surprised that works, but neat! 😅

> The quilt I'm rendering is set to 3360x3360 resolution with 8x6 tiles, which means each tile is 420x560. The quilt is then written directly to the canvas, after which the LKG plugin swizzles the render. Here's what I'm seeing on the LKG and in the popup window:

So, I believe this is due to the WebXR Library generating a 4096 x 4096 framebuffer for the Portrait, so it's trying to render a smaller quilt into a 4096 x 4096 framebuffer.

I'm working on a new release (available on npm here or GitHub here) which uses the proper device settings instead of relying on the auto-configuration of the framebuffer. It also comes with some other nice goodies like auto-windowing in Chrome (v100+)!

Curious to see if this fixes it for you!

It also adds screenshot and media-stream functionality (still very much in testing)

gkjohnson commented 1 year ago

> So, I believe this is due to the WebXR Library generating a 4096 x 4096 framebuffer for the Portrait, so it's trying to render a smaller quilt into a 4096 x 4096 framebuffer.

Thanks - I've taken a look at the webxr package to see how the buffer size is generated and tried to replicate it. I've also verified that the subframes specified by the camera are the same as the ones generated by my quilt system, but still no luck, even when rendering the quilt to an intermediate 4096x4096 buffer and then rendering with my WebXR hack:

> I'm working on a new release (available on npm here or GitHub here) which uses the proper device settings instead of relying on the auto-configuration of the framebuffer. It also comes with some other nice goodies like auto-windowing in Chrome (v100+)!

It would be nice if I could just take a quilt image as I've generated it and pass it to the library with the quilt parameters to have it display - much like the blocks.glass website seems to be able to do. Is that what you mean by media streaming? Otherwise I'll probably wait until something like that is available - hacking around three.js' WebXR implementation to get the path tracing displaying for this use case definitely feels less than optimal. I'll merge the PR in the meantime so quilts can be generated, though. What do you think?

Also - here's a new quilt render up on blocks.glass:

https://blocks.glass/gkjohnson/5573

BryanChrisBrown commented 1 year ago

Looking fabulous! I'll look into providing an entrypoint to the WebXR library for overriding the quilt input; I can see this being useful for some scenarios.

There is a way to send the quilt result directly to the Looking Glass using our core.js library; there's an example of how to do this here:

https://codesandbox.io/s/detect-a-looking-glass-l5o9d

In addition, you can try out the blocks.js API we've got here!

https://github.com/Looking-Glass/blocks.js

gkjohnson commented 1 year ago

OKAY. I was able to get it working this morning - it turns out I was using the wrong buffer dimensions when drawing a subframe. Sometimes I forget how important just getting some sleep is 😅:

And some quilts uploaded here:

https://blocks.glass/gkjohnson

> There is a way to send the quilt result directly to the Looking Glass using our core.js library...

I took a look at this, but unfortunately it requires a completed static image. With the approach I've used, you can render the quilt image on the display as the path tracing samples resolve, which is nice as a preview.

I feel like, ideally, a three.js LKG API for a quilt would look something like this:

```js
LookingGlassAPI.renderQuilt( renderTarget.texture, {
  rows: 5,
  columns: 9,
  views: 45,
} );
```

But I concede that this is probably a fairly niche use case and is nearly what I have now 😅. It's just that replicating parts of the internal LKG WebXR API is a bit brittle. But it's a nice demo for now and definitely something to look into in the future 😁

gkjohnson commented 1 year ago

The demo is merged here if you want to check it out:

https://gkjohnson.github.io/three-gpu-pathtracer/example/bundle/lkg.html