That's right, the performance of the CameraX API has a gap compared to Camera1 for some ML Kit features. We are actively working on optimizing the CameraX image-processing utilities, and will release the change in the near future.
In our sample app, we convert the MediaImage from CameraX into a Bitmap for rendering purposes. If you only want to detect faces, you can construct the InputImage with the MediaImage and send it to FaceDetector directly; that will perform much better.
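For reference, a minimal analyzer along those lines might look like the sketch below. It uses only the public CameraX and ML Kit APIs; the class name and the empty success handler are placeholders.

import android.media.Image;
import androidx.annotation.NonNull;
import androidx.camera.core.ExperimentalGetImage;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageProxy;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;

class FaceAnalyzer implements ImageAnalysis.Analyzer {
  private final FaceDetector detector = FaceDetection.getClient();

  @Override
  @ExperimentalGetImage
  public void analyze(@NonNull ImageProxy imageProxy) {
    Image mediaImage = imageProxy.getImage();
    if (mediaImage == null) {
      imageProxy.close();
      return;
    }
    // Wrap the media image directly; no YUV-to-Bitmap conversion is needed for detection.
    InputImage inputImage =
        InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
    detector.process(inputImage)
        .addOnSuccessListener(faces -> { /* handle detected faces */ })
        .addOnCompleteListener(task -> imageProxy.close());
  }
}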
Thanks. I want the Bitmap as well to sync the detector output with the input.
After much trial and error, I improved performance using https://github.com/xizhang/camerax-gpuimage/blob/master/app/src/main/java/com/appinmotion/gpuimage/YuvToRgbConverter.kt
I pass the YUV byte array to the InputImage. This improves FPS significantly while still giving access to the Bitmap.
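In case it helps others, the byte-array path looks roughly like this. Note that nv21Bytes is an assumption here: it stands for the current frame already packed as NV21, for example extracted while running the converter linked above.

// nv21Bytes is assumed to hold the current frame packed as NV21.
InputImage inputImage = InputImage.fromByteArray(
    nv21Bytes,
    imageProxy.getWidth(),
    imageProxy.getHeight(),
    imageProxy.getImageInfo().getRotationDegrees(),
    InputImage.IMAGE_FORMAT_NV21);
detector.process(inputImage); // the Bitmap for rendering can be built from the same bytes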
Please let us know when you improve the performance of CameraX.
Hi patolax,
I want to double-check your improvements. Did you just replace the ML Kit sample's implementation of the yuv420ThreePlanesToNV21 method with the one in YuvToRgbConverter?
One of the major differences I found between these two implementations is that the ML Kit one tries to optimize by checking whether the UV planes are already in NV21 layout. However, if the format on your device is never like that, it always pays an extra cost for that check, and ends up slower than doing the pixel-by-pixel conversion directly.
If you have time, could you add some logging to check areUVPlanesNV21's return value and see whether it is always false?
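Something like the following inside the sample's BitmapUtils would show it (the variable names are illustrative; the exact call site in your copy of the sample may differ):

// In yuv420ThreePlanesToNV21, log the result of the NV21 fast-path check:
boolean fastPath = areUVPlanesNV21(yuv420888planes, imageWidth, imageHeight);
android.util.Log.d("BitmapUtils", "areUVPlanesNV21 returned " + fastPath);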
Thanks,
Yup, it's false. Even with the RenderScript approach, CameraX performance is 10 to 15 FPS below Camera2 API throughput. There is clearly something wrong for performance to be this poor.
Hi, if you need the Bitmap, have you tried constructing the InputImage from the android.media.Image and running detection, while kicking off another thread to convert the image to a Bitmap in parallel? This may save you some time.
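A minimal sketch of that idea is below. Both toNv21() and nv21ToBitmap() are hypothetical helpers (any YUV_420_888-to-NV21 copy and NV21-to-Bitmap conversion, such as the converter linked earlier, would do), and detector is the FaceDetector from before.

// Run detection on the media image while a background executor converts the
// same frame to a Bitmap. The pixel data is copied up front so the conversion
// can finish even after the ImageProxy has been closed.
ExecutorService bitmapExecutor = Executors.newSingleThreadExecutor();

void analyze(ImageProxy imageProxy) {
  Image mediaImage = imageProxy.getImage();
  int width = mediaImage.getWidth();
  int height = mediaImage.getHeight();
  int rotation = imageProxy.getImageInfo().getRotationDegrees();

  byte[] nv21 = toNv21(mediaImage); // hypothetical YUV_420_888 -> NV21 copy
  Future<Bitmap> bitmapFuture =
      bitmapExecutor.submit(() -> nv21ToBitmap(nv21, width, height)); // hypothetical

  detector.process(InputImage.fromMediaImage(mediaImage, rotation))
      .addOnSuccessListener(faces -> { /* pair faces with bitmapFuture.get() */ })
      .addOnCompleteListener(task -> imageProxy.close());
}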
The conversion from a YUV image to a Bitmap does take longer than to a byte array/ByteBuffer. To make an app using the CameraX output more efficient, you can implement the overlay logic to draw on top of the camera preview with OpenGL directly. There should be examples you can search for.
Has anybody found a solution to this issue?
First of all, thanks for the amazing sample.
I have changed the face detection settings to show just the bounding box (no contours or landmarks).
In CameraXLivePreviewActivity, I have set the analysis resolution with
new ImageAnalysis.Builder().setTargetResolution(new Size(320, 480))
so that both LivePreviewActivity and CameraXLivePreviewActivity feed the same image size to the detector, yet the FPS on the same phone is noticeably lower with CameraX.
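(For reference, the analysis use case is configured roughly like this; the backpressure strategy line is an assumption about the rest of the setup:)

ImageAnalysis imageAnalysis =
    new ImageAnalysis.Builder()
        .setTargetResolution(new Size(320, 480))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();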
What's the main reason for this poor performance, and how can I improve it?
I used the old codebase based on LivePreviewActivity, which showed very good performance, but upon releasing the app people started to complain that the camera was not working (I assume because of the deprecated Camera API). I would like to use CameraXLivePreviewActivity, but the performance has to be improved before it is usable in my application.
Edit: I did some profiling; the image-conversion logic performs very differently in the two cases. In processImageProxy(), the line bitmap = BitmapUtils.getBitmap(image); takes 73 ms, while in processImage(), bitmap = BitmapUtils.getBitmap(data, frameMetadata) takes only 25 ms. Can we use RenderScript or some other more efficient approach?
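For anyone experimenting with the RenderScript route mentioned above, the NV21-to-Bitmap conversion is typically done with ScriptIntrinsicYuvToRGB, roughly as below. This is a sketch, not the sample's code; nv21, width, and height come from the frame being processed, and note the intrinsic is deprecated on recent Android versions.

import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;
import android.renderscript.Type;

Bitmap nv21ToBitmap(android.content.Context context, byte[] nv21, int width, int height) {
  // In real use, cache rs and script across frames instead of recreating them.
  RenderScript rs = RenderScript.create(context);
  ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

  // Input allocation sized to the raw NV21 bytes.
  Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length);
  Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

  // Output allocation holding RGBA pixels.
  Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
  Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

  in.copyFrom(nv21);
  script.setInput(in);
  script.forEach(out);

  Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
  out.copyTo(bitmap);
  return bitmap;
}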