wtct-hungary / UnityVision-iOS

This native plugin enables Unity to take advantage of specific features of Core-ML and Vision Framework on the iOS platform.
MIT License

Performance control #5

Closed oliverellmers closed 5 years ago

oliverellmers commented 5 years ago

Hello,

This is a very loose / broad question.

From a high level: are there any ways of increasing performance?

Thanks in advance!

Oliver

adamhegedues commented 5 years ago

Hi,

This plugin does not block the main thread while evaluating image buffers. By default it performs as many evaluations as it can, which is about 10-15 per second on an iPhone 7 using InceptionV3. If you need more frequent observation results:

- Use a lighter ML model, or send a smaller image buffer to the plugin. CoreML transforms and scales your input image before analysis, so learn the input requirements of the model you are using and provide an appropriately sized source image; fewer operations will then be performed during preparation.
- Avoid creating a new (managed) CVPixelBuffer every time you run an evaluation. That API is not intended for real-time use. Provide a Metal texture or a native buffer pointer (from the ARKit plugin) instead.
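For reference, here is a minimal Swift sketch of what the second point looks like on the native side. This is an illustration, not the plugin's actual implementation: the `FrameClassifier` type is hypothetical, and `pixelBuffer` is assumed to come from an existing source such as ARKit's `ARFrame.capturedImage` rather than a buffer allocated per frame.

```swift
import Vision
import CoreVideo

// Sketch only (not the plugin's code): reuse one VNCoreMLRequest
// across frames instead of allocating new managed buffers each time.
final class FrameClassifier {
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        request = VNCoreMLRequest(model: model)
        // Let Vision crop/scale to the model's input size; feeding an
        // image close to that size keeps preparation work small.
        request.imageCropAndScaleOption = .centerCrop
    }

    // `pixelBuffer` is assumed to be an existing buffer, e.g. from
    // ARKit's ARFrame.capturedImage, rather than one created per call.
    func classify(_ pixelBuffer: CVPixelBuffer) throws -> [VNClassificationObservation] {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try handler.perform([request])
        return request.results as? [VNClassificationObservation] ?? []
    }
}
```

Because the request object and its crop/scale configuration are created once, the per-frame cost is limited to the handler and the evaluation itself.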

oliverellmers commented 5 years ago

Great, thanks for the response!

Will look into the points you have discussed.