wtct-hungary / UnityVision-iOS

This native plugin enables Unity to take advantage of specific features of Core ML and the Vision framework on the iOS platform.
MIT License · 133 stars · 26 forks

Plugin crashes randomly if ARKitExample is used together with ARKitFaceTrackingConfiguration (Partially solved) #4

Closed · borderlineinteractive closed this issue 5 years ago

borderlineinteractive commented 6 years ago

Hi, thanks for this nice plugin.

I just tried to use your ARKitExample code with an ARKitFaceTrackingConfiguration. It basically works, and I also initially get the correct image classification from the front camera; however, after a short while the app crashes with an EXC_BAD_ACCESS exception.

More precisely, in:

int _vision_evaluateWithBuffer(CVPixelBufferRef buffer) {

    // In case of invalid buffer ref
    if (!buffer) return 0;

    // Forward message to the swift api
    return [[VisionNative shared] evaluateWithBuffer: buffer] ? 1 : 0;
}

the line return [[VisionNative shared] evaluateWithBuffer: buffer] ? 1 : 0; produces this error:

Thread 1: EXC_BAD_ACCESS (code=1, address=0x94a2994c0)

Any suggestions?

Thanks a lot in advance!

Update:

Additional testing shows that it's not the ARKitFaceTrackingConfiguration but another plugin running in the background that performs voice recognition. This was quite surprising, as the two don't seem to have anything to do with each other. I will contact the developer of the voice recognition plugin.

adamhegedues commented 6 years ago

Hi there,

It seems to me that the CVPixelBuffer ARKit exposes gets deallocated before it arrives at the native Vision plugin. Remember, the Unity ARKit plugin has no idea that UnityVision is reading its image buffer, so if Unity changed how and when the buffer for the front-facing camera gets released, that could cause problems for UnityVision.
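
For background: Core Video buffers are reference counted, so a consumer that needs a buffer to outlive its producer must take its own retain while the buffer is still valid; retaining after the producer has already freed it is too late and crashes just the same. A minimal sketch of that rule (illustrative only, not the plugin's actual code):

// Illustrative only: the Core Video ownership rule in miniature.
// A consumer that wants a buffer to survive the producer's release
// must take its own +1 reference while the buffer is still valid.
CVPixelBufferRef KeepBufferAlive(CVPixelBufferRef buffer) {
    if (buffer) CVPixelBufferRetain(buffer); // +1, now co-owned by the consumer
    return buffer; // balance later with CVPixelBufferRelease()
}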

There are ways in UnityVision to allocate a buffer that it owns. In the ARKitExample, UnityVision steals the buffer reference from ARKit, because that way we save a copy. In the webcam example, however, an independent Metal texture is used for classification. I have also written a managed wrapper for CVPixelBuffer; you can check it out in the Core Video example.

I may implement a way to duplicate native CVPixelBuffers, so you don't have to rely on the WebCamTexture API when you also use ARKit. You would then just make a copy of ARKit's buffer, and it's guaranteed that it won't be released prematurely. I just need to allocate some time for this. Meanwhile, you can use a webcam texture.
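
For illustration, such a deep copy might look roughly like the sketch below. This is a sketch only, not plugin code; it assumes a non-planar pixel format such as kCVPixelFormatType_32BGRA, whereas ARKit's camera image is bi-planar YCbCr and would need each plane copied separately:

#import <CoreVideo/CoreVideo.h>
#include <string.h>

// Hypothetical helper along the lines described above: create an independently
// owned copy of a CVPixelBuffer so ARKit can release the original at any time.
CVPixelBufferRef CopyPixelBuffer(CVPixelBufferRef source) {
    if (!source) return NULL;

    // Allocate a fresh buffer with the same dimensions and pixel format.
    CVPixelBufferRef copy = NULL;
    if (CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(source),
                            CVPixelBufferGetHeight(source),
                            CVPixelBufferGetPixelFormatType(source),
                            NULL, &copy) != kCVReturnSuccess) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(copy, 0);

    // Copy row by row, since the two buffers may use different row padding.
    const uint8_t *src = CVPixelBufferGetBaseAddress(source);
    uint8_t *dst = CVPixelBufferGetBaseAddress(copy);
    size_t srcStride = CVPixelBufferGetBytesPerRow(source);
    size_t dstStride = CVPixelBufferGetBytesPerRow(copy);
    size_t rowBytes = srcStride < dstStride ? srcStride : dstStride;
    for (size_t row = 0; row < CVPixelBufferGetHeight(source); ++row) {
        memcpy(dst + row * dstStride, src + row * srcStride, rowBytes);
    }

    CVPixelBufferUnlockBaseAddress(copy, 0);
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy; // +1 reference owned by the caller; release with CVPixelBufferRelease()
}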

Please keep me updated!

borderlineinteractive commented 6 years ago

Thanks a lot! This clarifies what is going on and I should be able to fix this.



borderlineinteractive commented 6 years ago

Hi Adam,

Thanks again for your help. Unfortunately, WebCamTexture does not seem to work together with ARKit. On the other hand, starting from the unityArCamera object, I don't know how to safely access the cvPixelBufferPtr to generate a persistent copy of the CVPixelBuffer. It would be very helpful if you could point me in the right direction.

Best wishes,

Leif


adamhegedues commented 6 years ago

Hi Leif,

I added a new example for ARKit. I hope this approach solves your problem. It uses the texture that Unity's ARKit plugin generates from the capturedImage (CVPixelBuffer) of the current ARFrame. I want you to try this first before having to create a deep copy of the pixel buffer every frame in your client application.

You can find it in ARKitExample2.cs. Try it out and let me know if it works for you.
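
For readers wondering how a texture can reach Vision at all: one route (a sketch under assumed names, not necessarily what ARKitExample2.cs does) is to wrap the MTLTexture in a CIImage and hand that to a VNImageRequestHandler:

#import <Vision/Vision.h>
#import <CoreImage/CoreImage.h>
#import <Metal/Metal.h>

// Illustrative sketch: run a Core ML classification request on a Metal texture.
// ClassifyTexture and model are assumed names, not part of the plugin's API.
void ClassifyTexture(id<MTLTexture> texture, VNCoreMLModel *model) {
    CIImage *image = [CIImage imageWithMTLTexture:texture options:nil];
    if (!image) return;

    VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:model
        completionHandler:^(VNRequest *req, NSError *error) {
            // The strongest classification comes first in the results array.
            VNClassificationObservation *top =
                (VNClassificationObservation *)req.results.firstObject;
            if (top) NSLog(@"%@ (%.2f)", top.identifier, top.confidence);
        }];

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCIImage:image options:@{}];
    NSError *error = nil;
    [handler performRequests:@[request] error:&error];
}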

Best regards, Adam

borderlineinteractive commented 6 years ago

Hi Adam,

Works very nicely! Thanks a lot for your help!

Best wishes,

Leif


Ilesh commented 5 years ago

Hi Adam,

I am working on a measurement application like Apple's Measure app. I have tried ARKit but have not gotten any satisfying results. After looking at your demo, I think it could be useful for my application. Please find below a link to my requirements.

Edge-detection-corner-detection-in-AR

I look forward to your reply.

Thanks and regards, Ilesh

adamhegedues commented 5 years ago

Hi Ilesh,

This plugin features rectangle detection, which only gives reliable results if a rectangular object is visible in full. You can try implementing your app with the current state of the plugin, but I can't guarantee anything. Also, since you don't need image classification, you'll probably want to strip that feature out of the plugin, because the mlmodel is over 100 MB.
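
For reference, the Vision request behind this kind of rectangle detection looks roughly like the following (an illustrative sketch, not the plugin's actual code):

#import <Vision/Vision.h>

// Illustrative sketch: run VNDetectRectanglesRequest on a pixel buffer and
// read back the normalized corner points of the detected rectangle.
void DetectRectangles(CVPixelBufferRef buffer) {
    VNDetectRectanglesRequest *request = [[VNDetectRectanglesRequest alloc] init];
    request.maximumObservations = 1; // keep only the strongest candidate

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:buffer options:@{}];
    NSError *error = nil;
    if (![handler performRequests:@[request] error:&error]) return;

    for (VNRectangleObservation *rect in request.results) {
        // Corners are normalized to [0, 1] with the origin at the bottom-left.
        NSLog(@"topLeft = (%f, %f)", rect.topLeft.x, rect.topLeft.y);
    }
}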

If I were you, I'd look for a native edge-detection solution for iOS and use this plugin as an example of how to integrate it with Unity.

Best regards, Adam