arianaa30 opened 11 months ago
Hi arianaa30,
Here are some pointers:
Ok thanks. Looking at the demo, it looks like for processing live preview frames from the Camera1 API, VisionProcessorBase uses processByteBuffer(), while for CameraX it defines processImageProxy(). Now that I have Camera2, I have to define a new function, right? Which of the two do you think I can reuse with minimum changes?
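In case it helps others landing here: Camera2's ImageReader delivers frames as android.media.Image in YUV_420_888, much like the ImageProxy that CameraX hands out, so the processByteBuffer() path is often the closer fit once you pack the image planes into an NV21 buffer. A rough, hedged Java sketch of that wiring, assuming the quickstart's VisionImageProcessor, FrameMetadata, and GraphicOverlay types, and a yuv420ToNv21() helper you would write yourself:

```java
// Sketch only: feeding a Camera2 ImageReader into the quickstart's processor.
// imageProcessor, graphicOverlay, sensorRotationDegrees, and backgroundHandler
// are assumed to exist in your camera controller.
ImageReader reader = ImageReader.newInstance(
        width, height, ImageFormat.YUV_420_888, /* maxImages= */ 2);
reader.setOnImageAvailableListener(imageReader -> {
    Image image = imageReader.acquireLatestImage();
    if (image == null) return;
    try {
        // Pack the three YUV planes into one NV21 ByteBuffer (helper assumed).
        ByteBuffer nv21 = yuv420ToNv21(image);
        FrameMetadata metadata = new FrameMetadata.Builder()
                .setWidth(image.getWidth())
                .setHeight(image.getHeight())
                .setRotation(sensorRotationDegrees) // from CameraCharacteristics
                .build();
        imageProcessor.processByteBuffer(nv21, metadata, graphicOverlay);
    } finally {
        image.close(); // always release the Image back to the reader
    }
}, backgroundHandler);
```

This mirrors what the demo's Camera1 path does with its preview callback, just with Camera2's ImageReader as the frame source.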
@arianaa30 Hey, I'm also trying to integrate pose detection into my app, which uses the Camera2 API. If you've found a solution for handling the bufferQueueProducer error, please share your insights so I can apply them in my app. It would be really appreciated.
I have an Android camera app that uses the Camera2 API and sends frames to a server. I recently did some work on top of MLKit's segmentation example code, which generates masks, a graphics overlay, etc. Now I want to integrate this MLKit demo into our app, but I'm not sure how to combine the two.
Given that my implementations live in the segmentation example code, my main question is how exactly I can call these functions from inside our camera app. I assume I should pass some variables, like frames, from our camera app to this MLKit demo, start processing them, and get something back. Are these compatible, given that our app uses the Camera2 API?
For example, I should first initialize a new VisionImageProcessor imageProcessor, then just call processByteBuffer() or maybe processBitmap()? But we also have the graphicOverlay. What exactly do these steps look like in my case? And is it best to pass frames as a ByteBuffer, a media.Image, or something else for best performance? Any tips or guidance on integration would be appreciated.
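On the ByteBuffer-vs-media.Image question: the demo's processByteBuffer() path expects NV21-style data, so the cheap route is usually to copy the Image's Y/U/V planes into a single NV21 buffer rather than converting every frame to a Bitmap. A simplified, hedged illustration of the NV21 packing in plain Java (it ignores the row/pixel strides that real Camera2 images can have, so treat it as a sketch, not a drop-in converter):

```java
public class Nv21Packer {
    /**
     * Packs YUV_420_888-style planes into an NV21 byte array:
     * the full Y plane first, then interleaved V/U chroma samples.
     * Simplified: assumes rowStride == width and pixelStride == 1;
     * real Camera2 planes may need per-row/per-pixel stride handling.
     */
    public static byte[] pack(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, nv21, 0, width * height);
        int offset = width * height;
        for (int i = 0; i < u.length; i++) {
            nv21[offset++] = v[i]; // NV21 stores V first...
            nv21[offset++] = u[i]; // ...then U for each chroma sample
        }
        return nv21;
    }

    public static void main(String[] args) {
        // Tiny 2x2 frame: 4 Y samples, 1 U and 1 V sample.
        byte[] y = {10, 20, 30, 40};
        byte[] u = {50};
        byte[] v = {60};
        System.out.println(java.util.Arrays.toString(pack(y, u, v, 2, 2)));
        // -> [10, 20, 30, 40, 60, 50]
    }
}
```

The resulting byte array can be wrapped with ByteBuffer.wrap() and handed to processByteBuffer() along with a FrameMetadata describing width, height, and rotation; the graphicOverlay is just passed through so the processor can draw results on it.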