mrousavy opened 1 year ago
> Is OpenGL even the right tool for the job? It feels like setting up a rendering context etc. is a large overhead, especially because I am now working in RGB instead of YUV/PRIVATE.
It should be OK to use OpenGL for "image processing" and then draw the result into the input `Surface` of a `MediaCodec` or `MediaRecorder`. BUT it's not so efficient to retrieve RGB values by calling `glReadPixels`: `glReadPixels` is a blocking operation, so I recommend you use some other async approach instead, e.g.:

1. Create an `ImageReader` instance with its format set to `RGB_888`
2. Get the `Surface` of this `ImageReader`
3. Draw into that `Surface` with OpenGL
4. Receive the `OnImageAvailable` callback once the draw operation is completed

See androidx for more details.
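A minimal sketch of those steps, assuming a hypothetical 1920x1080 resolution and a main-thread handler (I'm using `PixelFormat.RGBA_8888` here, which `ImageReader` commonly accepts for RGBA readback):

```kotlin
import android.graphics.PixelFormat
import android.media.ImageReader
import android.os.Handler
import android.os.Looper
import android.view.Surface

// 1. Create an ImageReader with an RGBA format (maxImages > 1 avoids stalls).
val imageReader = ImageReader.newInstance(
    1920, 1080, PixelFormat.RGBA_8888, /* maxImages */ 3
)

// 2. The Surface to render into with OpenGL (wrap it in an EGLSurface
//    via eglCreateWindowSurface in your GL code).
val glOutputSurface: Surface = imageReader.surface

// 4. Called asynchronously once a GL draw + eglSwapBuffers completes;
//    no blocking glReadPixels needed.
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
    try {
        val plane = image.planes[0]
        // plane.buffer now holds the RGBA pixels (mind plane.rowStride).
    } finally {
        image.close()
    }
}, Handler(Looper.getMainLooper()))
```

Step 3 (the actual GL draw) happens in your own rendering code targeting `glOutputSurface`.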
Note: If you want to get YUV instead of RGB, here are two choices for you:

1. Do the `RGB->YUV` conversion in the fragment shader that is used in step 3 (it's your responsibility to determine how the YUV values are stored; `RGB888` should always be the most suitable `ImageReader` format for this case)
2. Use the `GL_EXT_YUV_target` extension in the fragment shader that is used in step 3 (this extension ONLY supports `YUV444`, which is compatible with `RGB888`, the `ImageReader` format used in this case)

Thanks for your reply!
I'm currently going through an `ImageReader` -> `ImageWriter` setup (`USAGE_GPU_SAMPLED_IMAGE`), here's the code for that: https://github.com/mrousavy/react-native-vision-camera/blob/e845dc8397d2f53804cd755b7f73e6747a916bbb/package/android/src/main/java/com/mrousavy/camera/core/VideoPipeline.kt#L84-L113

This seems a bit unstable and some devices had problems with it, but to confirm: that'd be the recommended approach to get access to the raw Camera frames before passing them along to a `MediaRecorder`, right?
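For context, that forwarding pattern looks roughly like this (a simplified sketch, not the actual linked file; the size, `maxImages`, and the recorder surface are assumptions):

```kotlin
import android.graphics.ImageFormat
import android.hardware.HardwareBuffer
import android.media.ImageReader
import android.media.ImageWriter
import android.view.Surface

// The Camera writes PRIVATE frames into this reader's surface.
val reader = ImageReader.newInstance(
    1920, 1080, ImageFormat.PRIVATE, /* maxImages */ 3,
    HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE
)

// The writer feeds a downstream consumer, e.g. a MediaRecorder's input surface.
fun connect(recorderSurface: Surface) {
    val writer = ImageWriter.newInstance(recorderSurface, /* maxImages */ 3)
    reader.setOnImageAvailableListener({ r ->
        val image = r.acquireNextImage() ?: return@setOnImageAvailableListener
        // queueInputImage detaches the buffer from the reader's BufferQueue
        // and attaches it to the writer's — no pixel copy; the writer now
        // owns the Image, so don't close it here.
        writer.queueInputImage(image)
    }, null)
}
```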
Detaching an `Image` from one `BufferQueue` and then attaching that `Image` to another `BufferQueue` is the easiest way on Android to forward an `Image` to other "consumers".

The underlying pixel format that is chosen for this special use case (`PRIVATE` format combined with `USAGE_GPU_SAMPLED_IMAGE` usage) is implementation-defined and depends on the contract between Gralloc and the GL driver. In most cases the pixel format should be `YUV_420_888`, but you still need to confirm that if you want to read the pixels and then process them on the CPU.

About the combination `USAGE_GPU_SAMPLED_IMAGE | USAGE_CPU_READ_OFTEN`: I also don't know whether it is guaranteed to be supported by Qcom, MTK, and so on. Sorry :(
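Since support for that usage combination is vendor-specific, one hedged option (API 29+) is to probe it at runtime with `HardwareBuffer.isSupported` before committing to a CPU-read path. This sketch assumes a 1920x1080 YUV buffer:

```kotlin
import android.hardware.HardwareBuffer

// Returns true only if Gralloc/the GL driver on this device can allocate
// a buffer that is both GPU-sampleable and frequently CPU-readable.
fun supportsCpuReadableGpuBuffer(width: Int = 1920, height: Int = 1080): Boolean =
    HardwareBuffer.isSupported(
        width, height,
        HardwareBuffer.YCBCR_420_888,  // matches the usual YUV_420_888 choice
        /* layers */ 1,
        HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE or HardwareBuffer.USAGE_CPU_READ_OFTEN
    )

// Fall back to a pure GPU path (no CPU reads) when this returns false.
```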
Well, that sucks. Is there no alternative to this? I guess it's just gonna work on some phones, and won't work on others?
Hey all!
I have a Camera library that can do preview, photo capture, video capture, and frame processing at the same time. On iOS, this works perfectly. But on Android, it actually seems to be impossible to do this with Camera2/android.media APIs.
This is my structure:
Important detail: The `VideoPipeline` would do Frame Processing/ImageAnalysis and Video Recording in one, aka synchronously. I need to support `YUV_420_888`, `PRIVATE` and `RGBA_8888` as pixel formats for the MLKit image processor.

Is it possible to start off with PRIVATE/YUV/RGB frames, then later pass them to OpenGL for RGB processing/rendering? Or is there a way to receive PRIVATE/YUV frame data from OpenGL? I'm only aware of `glReadPixels`, which reads RGB.

I guess my main question is: Is OpenGL even the right tool for the job? It feels like setting up a rendering context etc. is a large overhead, especially because I am now working in RGB instead of YUV/PRIVATE.