Unity-Technologies / arfoundation-samples

Example content for Unity projects based on AR Foundation

"Synchronously convert to grayscale and color" memory overrun on iOS #505

Closed FrankSpalteholz closed 4 years ago

FrankSpalteholz commented 4 years ago

Hello, I'm trying to get a cropped region of the images captured by the iPhone's front camera, using the "Synchronously convert to grayscale and color" example from this resource: https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@3.1/manual/cpu-camera-image.html. It works, but I'm facing memory issues: it seems that for some reason Dispose() doesn't "work" and the memory usage grows quite fast (checking Xcode's debug navigator), so the app crashes after a while. Of course I first tried the "TestCameraImage" example, which works well and without the memory issue. So all I did was add/change the custom conversionParams struct (adding the crop-region definitions) and this specific buffer object (which is also disposed at the end of my code). My goal is to preprocess the raw image data as fast as possible and later pass it to some OpenCV functions. I'd really appreciate some help.

Thank you very much, Frank

tdmowrer commented 4 years ago

It works, but I'm facing memory issues: it seems that for some reason Dispose() doesn't "work" and the memory usage grows quite fast

Disposing the camera image releases a native camera resource (not memory). If it didn't "work", then you would run out of resources in a few frames and the background rendering would stop working.

The camera image sample writes directly to a Texture2D's memory; it doesn't allocate any memory, so there is nothing for it to leak. In your implementation, are you allocating a NativeArray or other memory? Could you post a snippet of how you're using it?
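
For comparison, the conversion in that sample looks roughly like this (a sketch following the 3.1 docs page you linked, reusing the cameraManager and m_Texture fields from your class; it is not a drop-in script):

unsafe void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
{
    XRCameraImage image;
    if (!cameraManager.TryGetLatestImage(out image))
        return;

    // Full-image conversion parameters: no crop, RGBA32 output, mirrored vertically.
    var conversionParams = new XRCameraImageConversionParams(
        image, TextureFormat.RGBA32, CameraImageTransformation.MirrorY);

    // Create the texture only when it is missing or the size changed, not every frame.
    if (m_Texture == null
        || m_Texture.width != conversionParams.outputDimensions.x
        || m_Texture.height != conversionParams.outputDimensions.y)
    {
        m_Texture = new Texture2D(
            conversionParams.outputDimensions.x,
            conversionParams.outputDimensions.y,
            conversionParams.outputFormat,
            false);
    }

    // GetRawTextureData<byte>() is a view of the texture's own memory, so Convert
    // writes into the texture in place and nothing extra is allocated per frame.
    var rawTextureData = m_Texture.GetRawTextureData<byte>();
    try
    {
        image.Convert(conversionParams, new IntPtr(rawTextureData.GetUnsafePtr()), rawTextureData.Length);
    }
    finally
    {
        // Dispose releases the native camera resource; it does not free managed memory.
        image.Dispose();
    }

    m_Texture.Apply();
}

The same idea carries over to a cropped conversion: convert into one persistent buffer (or the texture itself) rather than allocating a new destination every frame.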

FrankSpalteholz commented 4 years ago

using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class Capture : MonoBehaviour {

[SerializeField]
ARCameraManager m_CameraManager;

[SerializeField]
RawImage m_RawImage;

Texture2D m_Texture;

public ARCameraManager cameraManager
{
    get { return m_CameraManager; }
    set { m_CameraManager = value; }
}

public RawImage rawImage
{
    get { return m_RawImage; }
    set { m_RawImage = value; }
}

void OnEnable()
{
    if (m_CameraManager != null)
    {
        m_CameraManager.frameReceived += OnCameraFrameReceived;
    }
}

void OnDisable()
{
    if (m_CameraManager != null)
    {
        m_CameraManager.frameReceived -= OnCameraFrameReceived;
    }
}

unsafe void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
{

    XRCameraImage image;
    if (!cameraManager.TryGetLatestImage(out image))
    {
        return;
    }

    var format = TextureFormat.RGBA32;

    m_Texture = new Texture2D(300, 300, format, false);

    var conversionParams = new XRCameraImageConversionParams
    {
        // just hardcoded test-area 
        inputRect = new RectInt(100, 100, 300, 300),

        outputDimensions = new Vector2Int(300, 300),

        outputFormat = format,

        transformation = CameraImageTransformation.MirrorY
    };

    int size = image.GetConvertedDataSize(conversionParams);

    var buffer = new NativeArray<byte>(size, Allocator.Temp);

    try
    {
        image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);
    }
    finally
    { 
        image.Dispose();
    }

    m_Texture = new Texture2D(
        conversionParams.outputDimensions.x,
        conversionParams.outputDimensions.y,
        conversionParams.outputFormat,
        false);

    m_Texture.LoadRawTextureData(buffer);
    m_Texture.Apply();

    buffer.Dispose();
    m_RawImage.texture = m_Texture;
}
} // class Capture

Hello Tim, thanks for your fast reply! As you can see, this snippet is essentially the provided example, which, again, works. It's just that memory overrun that is hard to debug (or I just don't know how to figure out the reason).

Added: it's the same issue when I provide the whole image instead of the cropped area, so maybe it is indeed due to the buffer/native array.

tdmowrer commented 4 years ago

If buffer is not disposed, then that could cause a pretty good-sized memory leak. Are there any exceptions thrown between creating buffer and buffer.Dispose()?

A way to guard against this in general is to wrap your code in a using statement, e.g.,

using (var buffer = new NativeArray<byte>(size, Allocator.Temp))
{
    // use the buffer here
}
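Applied to your handler, that would look something like this (just a sketch, using the same fields and hard-coded crop region from your snippet; XRCameraImage is also disposable, so it can sit in a using block as well, and creating the Texture2D once instead of every frame avoids piling up textures):

unsafe void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
{
    XRCameraImage image;
    if (!cameraManager.TryGetLatestImage(out image))
        return;

    // Both the camera image and the buffer are released even if Convert throws.
    using (image)
    {
        var conversionParams = new XRCameraImageConversionParams
        {
            inputRect = new RectInt(100, 100, 300, 300),
            outputDimensions = new Vector2Int(300, 300),
            outputFormat = TextureFormat.RGBA32,
            transformation = CameraImageTransformation.MirrorY
        };

        int size = image.GetConvertedDataSize(conversionParams);

        using (var buffer = new NativeArray<byte>(size, Allocator.Temp))
        {
            image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

            // Reuse one 300x300 texture instead of allocating a new one per frame.
            if (m_Texture == null)
            {
                m_Texture = new Texture2D(
                    conversionParams.outputDimensions.x,
                    conversionParams.outputDimensions.y,
                    conversionParams.outputFormat,
                    false);
            }

            m_Texture.LoadRawTextureData(buffer);
            m_Texture.Apply();
            m_RawImage.texture = m_Texture;
        }
    }
}
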
FrankSpalteholz commented 4 years ago

Tim! Thank you so much for getting back to me. In the meantime I've chosen the working "TestCameraImage" example and put the image data directly into an OpenCV matrix, which works quite well. But I'll let you know about possible exceptions with your suggestion within the next few days; it would be good to figure this out for future projects, for sure. If you're interested, check this out; it's the prototype I'm currently working on: https://vimeo.com/429245480

tdmowrer commented 4 years ago

Looks pretty good! Curious what you need OpenCV for, since the features shown in your video are all provided by ARKit already.

amir989 commented 4 years ago

The video is really cool. ARKit is great, but it's not as accurate as other libraries. For instance, iOS's native face landmark detection is much more accurate than the ARKit face mesh, especially around the eyebrows, and it is lighter in terms of processing than the face mesh. Using ARKit also closes off access to other camera features, such as tap to focus, changing the camera's exposure, and other camera settings. So, for those reasons, sometimes we need to use libraries other than ARKit or ARCore.