andybak opened 1 year ago
I'm currently looking at tighter integration with the Editor. I think I might build on top of the official Unity Plugin instead to leverage some of the existing and upcoming features there.
Just thinking some more about this. The BlockadeLabs API now supports generating depth maps, which simplifies things.
With regard to rendering depth - have you thought about doing something at the fragment shader level instead of displacing vertices?
I presume that's what you're doing, but my memory is hazy as it's been a couple of months since I looked at this project.
I think Facebook uses parallax occlusion mapping or similar for their 3D photos and I've always found their rendering rather effective.
I've not found anyone using a similar technique for 360 panoramas, and I can't think of a reason why it wouldn't work. Would it be more or less taxing on hardware than using a displacement shader? I'm particularly interested in mobile XR myself.
I hadn't considered it and I'm indeed just displacing vertices at the moment.
Some thoughts on performance:
The sphere I currently use does have a fairly high resolution (~16K vertices). That shouldn't be an issue for a simple viewer, even on mobile XR, but if you have a lot of other things going on in the scene you'd surely want to optimize it. Dropping the sphere resolution, trading off some depth accuracy, would be the simple fix. To optimize further I'd probably look into traditional meshing/simplification algorithms to make the resolution adaptive, with high density used only where the depth map needs it.
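For anyone following along, here's a minimal CPU-side sketch of the displacement idea (how this project actually does it may differ). It assumes a readable equirectangular depth texture where larger values mean farther away; names like `depthTexture` and `maxDistance` are illustrative:

```csharp
using UnityEngine;

public class DepthDisplacedSphere : MonoBehaviour
{
    public Texture2D depthTexture;  // readable equirect depth map (assumption: larger = farther)
    public float maxDistance = 10f; // scale applied to the sampled depth value

    void Start()
    {
        var mesh = GetComponent<MeshFilter>().mesh;
        var vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; i++)
        {
            // Direction from the sphere centre through this vertex.
            Vector3 dir = vertices[i].normalized;

            // Convert the direction to equirectangular UVs.
            float u = 0.5f + Mathf.Atan2(dir.x, dir.z) / (2f * Mathf.PI);
            float v = 0.5f + Mathf.Asin(dir.y) / Mathf.PI;

            // Push the vertex out along its direction by the sampled depth.
            float depth = depthTexture.GetPixelBilinear(u, v).r;
            vertices[i] = dir * (depth * maxDistance);
        }
        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}
```

The same math moves straight into a vertex shader, which avoids the one-off CPU cost and makes the depth map swappable at runtime.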
For doing the reprojection in the fragment shader directly: It would save you the vertex shader cost, but moving more complexity to the fragment shader might not be worth it, especially with high-resolution screens / stereo rendering. But maybe it's less complex than I think...
My gut feeling is: not worth it for performance reasons alone. Unless the vertex stage is really your bottleneck, in which case go for it. (Insert the usual yada yada about profiling before optimizing...) But I don't want to discourage you from trying; it's an interesting approach for sure.
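For concreteness, here is roughly the per-pixel work a parallax-style fragment shader would do, written as a CPU-side C# sketch (all names and the step count are illustrative; `sampleDepth` stands in for an equirect depth-map lookup). Each fragment pays for N depth samples, which is where the "more complexity in the fragment shader" cost comes from:

```csharp
using UnityEngine;

public static class PanoramaRaymarch
{
    // March from the eye along the view ray until it crosses the depth
    // surface (the unit sphere pushed in/out by the depth map), then
    // return the direction to sample the colour panorama with.
    public static Vector3 FindHitDirection(
        Vector3 eyePos, Vector3 viewDir,                // viewDir normalized
        System.Func<Vector3, float> sampleDepth,        // distance for a direction
        int steps = 32, float maxDistance = 10f)
    {
        for (int i = 1; i <= steps; i++)
        {
            Vector3 p = eyePos + viewDir * (maxDistance * i / steps);
            // The ray is inside the surface while |p| < depth(direction of p).
            if (p.magnitude >= sampleDepth(p.normalized))
                return p.normalized; // crossed the surface: sample colour here
        }
        return (eyePos + viewDir * maxDistance).normalized; // no hit found
    }
}
```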
Another somewhat related idea I'll throw in: I've thought about using the depth to render the skybox to a high-res stereo cubemap and displaying that via a compositor layer / OVROverlay. Probably less parallax than either of the other two approaches, but I'm curious if there's a quality gain to be had for this type of content.
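Roughly what I have in mind, as a sketch using Unity's stereo cubemap capture and the Meta XR SDK's OVROverlay (assumes the cube render textures already exist; field names are illustrative):

```csharp
using UnityEngine;

public class StereoCubemapCapture : MonoBehaviour
{
    public Camera captureCamera;         // sits at head position; stereoSeparation sets the eye offset
    public RenderTexture leftEyeCubemap; // render textures created with dimension = Cube
    public RenderTexture rightEyeCubemap;
    public OVROverlay overlay;           // OVROverlay with its shape set to Cubemap (e.g. in the inspector)

    public void Capture()
    {
        // 63 = all six faces; Unity applies the per-eye offset itself.
        captureCamera.RenderToCubemap(leftEyeCubemap, 63,
            Camera.MonoOrStereoscopicEye.Left);
        captureCamera.RenderToCubemap(rightEyeCubemap, 63,
            Camera.MonoOrStereoscopicEye.Right);

        // Hand both eyes to the compositor layer.
        overlay.textures = new Texture[] { leftEyeCubemap, rightEyeCubemap };
    }
}
```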
I must admit I don't have the time or inclination to attempt the fragment shader idea. Your gut feeling, combined with the empirical evidence (why is nobody else doing it?) and the fact that I have a TODO list as long as a very long arm, is enough to persuade me to spend my time elsewhere.
Your stereo cubemap idea is interesting. I've just added custom skybox support to Open Brush, including equirectangular stereo over/under as a supported format.
The effectiveness is very dependent on good source input, but it's cheap and impressive when it works well.
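For anyone unfamiliar with the over/under layout: it's one equirect image per eye, stacked vertically. A skybox shader picks the half via the stereo eye index; here's the same remap as plain C# (purely illustrative, and note that which eye goes on top varies between tools):

```csharp
using UnityEngine;

public static class OverUnderLayout
{
    // Remap full-frame UVs into the half belonging to the given eye.
    // Assumes the left eye (index 0) is on top; swap the halves if your
    // source uses the opposite convention.
    public static Vector2 RemapUV(Vector2 uv, int eyeIndex)
    {
        // Unity UVs have v = 1 at the top, so the top half is [0.5, 1].
        return new Vector2(uv.x, uv.y * 0.5f + (eyeIndex == 0 ? 0.5f : 0f));
    }
}
```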
This repo is up to date with the current API: https://github.com/CatDarkGame/AISkyboxGenerator