[Open] thisum opened this issue 8 years ago
Hi,

I don't use the Google Vision API yet, so this is just my assumption.

Do you call `ByteBuffer#clear` before `setImageData` within `IFrameCallback`? `IFrameCallback#onFrame` is called from the native library, and its argument is a direct `ByteBuffer`, so some of its fields (position/limit) may not be set correctly.

You also need to call either `setPreviewDisplay` or `setPreviewTexture` together with `setFrameCallback`.

You can also use `setBitmap`; see "How to create Bitmap using IFrameCallback".
saki
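The `ByteBuffer#clear` point above can be illustrated with plain `java.nio` (a minimal sketch; the native-fill scenario is simulated here, not the real `onFrame` callback). A direct buffer handed over after being filled may arrive with its position at the limit, so a consumer that relies on `remaining()` sees no data; `clear()` resets the position to 0 and the limit to the capacity without touching the bytes:

```java
import java.nio.ByteBuffer;

public class BufferResetDemo {
    public static void main(String[] args) {
        // Simulate a direct buffer that native code has filled with frame bytes.
        ByteBuffer frame = ByteBuffer.allocateDirect(8);
        for (int i = 0; i < 8; i++) {
            frame.put((byte) i);
        }

        // After the fill, position == limit, so a consumer sees nothing.
        System.out.println("before clear: remaining=" + frame.remaining()); // 0

        // clear() resets position to 0 and limit to capacity; bytes are untouched.
        frame.clear();
        System.out.println("after clear: remaining=" + frame.remaining()); // 8

        byte[] copy = new byte[frame.remaining()];
        frame.get(copy);
        System.out.println("first=" + copy[0] + ", last=" + copy[7]);
    }
}
```

(`rewind()` would also work if the limit is already correct; `clear()` is the safer reset when you cannot trust either field.)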
I'm trying to integrate the output of the UVCCamera with the Google Vision API (Android). My plan is to analyse one frame at a time in order to do image analysis on those frames (barcode reading), so I intend to use `IFrameCallback` and analyse only 3 frames per second. According to the Google API, I can use only the NV16, NV21, or YV12 formats, so to feed the analyser I'm using the following method:

`public Frame.Builder setImageData (ByteBuffer data, int width, int height, int format)`

Hence I used `mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_NV21)` to set up the camera and `ImageFormat.NV21` to create the frame object. But I'm not getting any results.

Could you please let me know whether I have used the correct method to get a `ByteBuffer` in the required format, or whether I need to make further changes to the camera parameters? And is this usage of `IFrameCallback` OK?
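The setup described in the question, combined with the advice in the reply, could be sketched roughly as below. This is an untested sketch under assumptions: the package locations of `UVCCamera`/`IFrameCallback`, the `mDetector`, `mSurfaceTexture`, and the `PREVIEW_WIDTH`/`PREVIEW_HEIGHT` constants are placeholders, and the Mobile Vision classes are used as the question names them. The key details are resetting the buffer before `setImageData` and starting the preview alongside the callback; note also that a full NV21 frame must contain `width * height * 3 / 2` bytes.

```java
// Sketch only — class/package names are assumptions, not verified against the repo.
import java.nio.ByteBuffer;

import android.graphics.ImageFormat;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.barcode.Barcode;
import com.serenegiant.usb.IFrameCallback; // actual package may differ
import com.serenegiant.usb.UVCCamera;

final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        // Reset position/limit before reading — the buffer comes from native code
        // and its fields may not be set correctly (see the reply above).
        frame.clear();

        final Frame visionFrame = new Frame.Builder()
                .setImageData(frame, PREVIEW_WIDTH, PREVIEW_HEIGHT, ImageFormat.NV21)
                .build();
        final SparseArray<Barcode> barcodes = mDetector.detect(visionFrame);
        // ... handle detected barcodes ...
    }
};

// The callback alone is not enough: one of setPreviewDisplay/setPreviewTexture
// must also be set before starting the preview.
mUVCCamera.setPreviewTexture(mSurfaceTexture);
mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_NV21);
mUVCCamera.startPreview();
```

Throttling to 3 frames per second would then be done inside `onFrame`, e.g. by skipping frames whose arrival time is too close to the previously analysed one.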