Unity-Technologies / arfoundation-samples

Example content for Unity projects based on AR Foundation

Getting depth images from front camera on iOS? #959

patrick508 closed this issue 2 years ago

patrick508 commented 2 years ago

Hi all,

I have been following multiple tutorials to get and visualize depth images on iOS using the occlusion manager. For the back camera everything is straightforward and works as expected. If I then switch the camera to "user", I get nothing: the texture is just blank.

It seems hard to find a definitive answer: is getting depth through the occlusion manager even possible with the front camera on iOS (and, looking ahead, on Android)?
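For reference, a minimal sketch of how I switch cameras, via the camera manager's requested facing direction (the class and method names here are hypothetical; it assumes a serialized ARCameraManager reference):

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class CameraSwitcher : MonoBehaviour
{
    [SerializeField]
    ARCameraManager CameraManager;

    // Request the front-facing ("user") camera; AR Foundation applies the change
    // when the platform supports that facing direction.
    public void UseFrontCamera()
    {
        CameraManager.requestedFacingDirection = CameraFacingDirection.User;
    }
}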

Below is how I currently get the depth image and visualize it:

using Unity.Collections;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class DepthImageVisualizer : MonoBehaviour
{
    [SerializeField]
    AROcclusionManager OcclusionManager;

    [SerializeField]
    RawImage _rawImage;

    void Update()
    {
        if (OcclusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage image))
        {
            // XRCpuImage wraps a native resource, so dispose it as soon as we are done.
            using (image)
            {
                // Use the texture.
                UpdateRawImage(_rawImage, image);
            }
        }

        UpdateTexts();
    }

    // Updates on-screen debug text; body omitted in the original post.
    void UpdateTexts() { }

    static void UpdateRawImage(RawImage rawImage, XRCpuImage cpuImage)
    {
        // Get the texture associated with the UI.RawImage that we wish to display on screen.
        Texture2D texture = rawImage.texture as Texture2D;

        // If the texture hasn't yet been created, or if its dimensions have changed, (re)create it.
        // Note: Although texture dimensions do not normally change frame-to-frame, they can change
        // in response to a change in the camera resolution (for camera images) or changes to the
        // quality of the human depth and human stencil buffers.
        if (texture == null || texture.width != cpuImage.width || texture.height != cpuImage.height)
        {
            texture = new Texture2D(cpuImage.width, cpuImage.height, cpuImage.format.AsTextureFormat(), false);
            rawImage.texture = texture;
        }

        // For display, we need to mirror about the vertical axis.
        var conversionParams = new XRCpuImage.ConversionParams(cpuImage, cpuImage.format.AsTextureFormat(), XRCpuImage.Transformation.MirrorY);

        // Debug.Log("Texture format: " + cpuImage.format.AsTextureFormat()); // -> RFloat

        // Get the Texture2D's underlying pixel buffer.
        NativeArray<byte> rawTextureData = texture.GetRawTextureData<byte>();

        // Make sure the destination buffer is large enough to hold the converted data (they should be the same size).
        Debug.Assert(rawTextureData.Length == cpuImage.GetConvertedDataSize(conversionParams.outputDimensions, conversionParams.outputFormat),
            "The Texture2D is not the same size as the converted data.");

        // Perform the conversion.
        cpuImage.Convert(conversionParams, rawTextureData);

        // "Apply" the new pixel data to the Texture2D.
        texture.Apply();
    }
}
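Since the converted format is RFloat, each pixel in the texture's raw data is a single 32-bit float (for ARKit environment depth, a distance in meters, as far as I can tell). A minimal sketch of a hypothetical helper for reading one value back out of the texture created above:

// Hypothetical helper: read the depth value at pixel (x, y) from an RFloat texture,
// e.g. the one created in UpdateRawImage above.
static float SampleDepth(Texture2D texture, int x, int y)
{
    NativeArray<float> depths = texture.GetRawTextureData<float>();
    return depths[y * texture.width + x];
}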

Any information is appreciated. If it's not possible through the front camera using the occlusion manager, what else should we use? We did find a post somewhere (this one) with a couple of scripts and a .mm file that let us get the depth, but it lacks options we want, such as temporal smoothing.

Thanks in advance!

andyb-unity commented 2 years ago

AR Foundation only supports depth images on LiDAR cameras, none of which are front-facing. See the requirements: https://docs.unity3d.com/Packages/com.unity.xr.arkit@4.2/manual/arkit-occlusion.html#requirements
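You can also verify this at runtime before trying to acquire a depth image by inspecting the occlusion subsystem descriptor. A minimal sketch (property names as I read them from AR Foundation 4.2, where support is reported as a Supported enum because some platforms can't know until the session is running; verify against your installed version):

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class DepthSupportCheck : MonoBehaviour
{
    [SerializeField]
    AROcclusionManager OcclusionManager;

    void Start()
    {
        // descriptor is null until the occlusion subsystem has been created.
        var descriptor = OcclusionManager.descriptor;
        if (descriptor == null || descriptor.environmentDepthImageSupported == Supported.Unsupported)
        {
            Debug.LogWarning("Environment depth is not available; on iOS it requires " +
                "a LiDAR sensor, which is only found on rear-facing cameras.");
        }
    }
}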