DanMillerDev / ARFoundation_PostProcessing

Sample repo showing post processing with mobile AR
MIT License

None of these post processing features are using the correct ARKit data #2

Open sam598 opened 4 years ago

sam598 commented 4 years ago

I know this is covered in the readme and on Twitter, but I wanted to clarify a few things.

ARKit 3 added additional camera intrinsic data for grain, motion blur, and camera depth of field. These values are updated with every ARKit frame. However, this project is not using any of those new features; most of these effects are hard-coded in the editor.

That being said the effects here are quite beautiful, and it really gets across the idea of how great realtime compositing could look. It's just not quite correct yet.

For example, camera grain should be driven by: https://developer.apple.com/documentation/arkit/arframe/3255172-cameragrainintensity using the texture provided by: https://developer.apple.com/documentation/arkit/arframe/3255173-cameragraintexture
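Roughly, reading those per-frame values on the native side would look something like this (a minimal sketch; `applyGrain` is just a placeholder for however the grain pass ends up being wired in):

```swift
import ARKit
import Metal

final class GrainDriver: NSObject, ARSessionDelegate {
    // ARKit calls this with every new frame; the grain values change per frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard #available(iOS 13.0, *) else { return }

        // Per-frame grain parameters provided by ARKit 3.
        let intensity: Float = frame.cameraGrainIntensity          // 0.0 ... 1.0
        let grainTexture: MTLTexture? = frame.cameraGrainTexture   // tileable noise texture

        // Placeholder hook: feed the values into the grain pass every frame
        // instead of using a fixed, editor-authored intensity.
        applyGrain(intensity: intensity, texture: grainTexture)
    }

    private func applyGrain(intensity: Float, texture: MTLTexture?) {
        // Wire the values into the renderer / post stack here.
    }
}
```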

The motion blur shutter angle should come from this value: https://developer.apple.com/documentation/arkit/arcamera/3182986-exposureduration divided by 1/60th of a second (the frame duration at 60 fps), then multiplied by 360 degrees.
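As a worked example, an exposure of 1/120 s at 60 fps gives (1/120) / (1/60) × 360° = 180°. In native Swift it's a one-liner (the 60 fps frame duration is an assumption about the render rate):

```swift
import ARKit

// Convert ARKit's per-frame exposure duration into a motion blur shutter angle,
// assuming the app renders at 60 fps (frame duration = 1/60 s).
@available(iOS 13.0, *)
func shutterAngleDegrees(for camera: ARCamera, frameDuration: Double = 1.0 / 60.0) -> Double {
    // e.g. exposureDuration = 1/120 s -> (1/120) / (1/60) * 360 = 180 degrees
    return (camera.exposureDuration / frameDuration) * 360.0
}
```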

ARKit 3 has the capability to return the real camera's focus point. However, Apple seems to be reserving this feature for their own Reality Composer platform. Instead, this demo takes the camera's horizontal field of view, checks whether it has changed, and then adds a blur effect every few seconds.

Both ARKit and ARCore update their camera frustum every frame. They do this because mobile phone cameras have moving optical elements that are constantly stabilizing and changing focus, which affects the camera calibration. So there is nothing that would prevent this from running with ARCore, except maybe Post Processing stack compatibility.
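For reference, that per-frame calibration is already exposed on the ARKit side; a minimal sketch using standard ARCamera properties (nothing project-specific):

```swift
import ARKit
import simd

// Every ARFrame carries a freshly updated calibration, because the phone's
// optics shift as they stabilize and refocus.
func readCalibration(from frame: ARFrame) {
    let camera = frame.camera
    let intrinsics: simd_float3x3 = camera.intrinsics   // focal lengths / principal point, per frame
    let projection = camera.projectionMatrix            // frustum used to render the virtual content

    print("fx:", intrinsics.columns.0.x, "fy:", intrinsics.columns.1.y)
    print("projection:", projection)
}
```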

Finally, all of these effects are applied globally to the image, so double motion blur, double grain, and double defocus are being added to the background camera image. All of these effects should only be applied to the newly composited elements, since the goal of all of this is to match the composited elements to the real world.

Again, this is fantastic work so far. I know on Twitter you had some questions about how the camera intrinsics are supposed to work, so I would be more than happy to work with you to get these right.

sam598 commented 4 years ago

So after writing this I kept thinking about it and I don’t think the AR Background Render and Post Processing stack can easily achieve the right effect.

I started writing an alternate AR compositing technique and should have something to share today.

scode-cmd commented 4 years ago

Hi sam, your approach sounds interesting. Any news on the implementation?