kiah2008 opened this issue 1 year ago
I recently discovered a remarkable OpenGL extension: GL_EXT_YUV_target. Let's study it.
Overview
This extension adds support for three new YUV related items: first, rendering to YUV images; second, sampling from YUV images while keeping the data in YUV space; third, it defines new built-in functions that do conversion from RGB to YUV with controls to choose ITU-R BT.601-7, ITU-R BT.601-7 Full range (JFIF images), or ITU-R BT.709-5 standard.
1.2 What GL_EXT_YUV_target can do
From the description on the Khronos site, this extension provides three capabilities:
- Rendering to YUV-format images
- Sampling YUV images while keeping the data in YUV format
- New built-in functions that convert between YUV and RGB

What can these three capabilities achieve concretely? Let's look at an example.
Currently, most Android Camera frame data is stored in YUV formats (such as NV21), and GL_EXT_YUV_target happens to be able to render to YUV-format images. That means we can use OpenGL ES through this extension to post-process Camera frame data directly, for example rendering eye-catching objects on top of the camera frames. That is the ultimate goal of this article.
Below we examine each capability in turn.
To perform the YUV rendering capability in this extension, an application attaches a texture to the framebuffer object as the color attachment. If the texture has a target type of TEXTURE_EXTERNAL_OES with a YUV color format, the GL driver can use this framebuffer object as the render target; TEXTURE_EXTERNAL_OES targets with RGB color formats are not allowed with this extension.
From the above we understand that a YUV-format TEXTURE_EXTERNAL_OES texture can be attached to a framebuffer object, and GL can then use that framebuffer object as the render target.
Later on the Khronos extension page, we see that the OpenGL ES 3.0 spec wording is amended so that a TEXTURE_EXTERNAL_OES texture may be attached to a framebuffer object; the change is as follows:
from "If texture is not zero, then texture must either name an existing two dimensional texture object and textarget must be TEXTURE_2D or texture must name an existing cube map...."
to "If texture is not zero, then texture must either name an existing two dimensional texture object and textarget must be TEXTURE_2D or TEXTURE_EXTERNAL_OES or texture must name an existing cube map...." 2.1 framebuffer object的建立 我們介紹下將TEXTURE_EXTERNAL_OES 繫結到framebuffer object的實現程式碼:
sp<GraphicBuffer> dstTexBuffer;
GLuint dstTex;
GLuint gFbo;
//建立紋理然後將img1繫結到該紋理上
glGenTextures(1, &dstTex);
checkGlError("glGenTextures");
glBindTexture(GL_TEXTURE_EXTERNAL_OES, dstTex);
checkGlError("glBindTexture");
glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
(GLeglImageOES)img1);
checkGlError("glEGLImageTargetTexture2DOES");
//建立Framebuffer Object並將紋理dstTex繫結到color attachment0上
glGenFramebuffers(1, &gFbo);
glBindFramebuffer(GL_FRAMEBUFFER, gFbo);
//將dstTex繫結到Framebuffer Object的colorattachment0 上
//這裡需要注意第二個引數只能為GL_COLOR_ATTACHMENT0
//第三個引數為GL_TEXTURE_EXTERNAL_OES
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_EXTERNAL_OES, dstTex, 0);
//檢查Framebuffer object的有效性
glCheckFramebufferStatus(GL_FRAMEBUFFER);
checkEglError("glCheckFramebufferStatus");
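For reference, img1 above is an EGLImage created beforehand and not shown in the post. A minimal sketch of how it might be produced from dstTexBuffer, assuming a GraphicBuffer allocated with a YUV pixel format such as HAL_PIXEL_FORMAT_YCrCb_420_SP (the usage flags here are illustrative):

// Allocate a YUV GraphicBuffer and wrap it in an EGLImage (sketch; error checks omitted).
dstTexBuffer = new GraphicBuffer(yuvTexWidth, yuvTexHeight, HAL_PIXEL_FORMAT_YCrCb_420_SP,
                                 GraphicBuffer::USAGE_HW_RENDER |
                                 GraphicBuffer::USAGE_SW_READ_OFTEN |
                                 GraphicBuffer::USAGE_SW_WRITE_OFTEN);
EGLClientBuffer clientBuf = (EGLClientBuffer)dstTexBuffer->getNativeBuffer();
EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
EGLImageKHR img1 = eglCreateImageKHR(eglGetDisplay(EGL_DEFAULT_DISPLAY), EGL_NO_CONTEXT,
                                     EGL_NATIVE_BUFFER_ANDROID, clientBuf, attrs);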
2.2 Using the framebuffer object
Let's start with the simplest scenario: using glClearColor to clear the framebuffer object to a color of our choosing.
void renderFrame() {
    glBindFramebuffer(GL_FRAMEBUFFER, gFbo);
    // Note: when the framebuffer is bound to a YUV texture in
    // HAL_PIXEL_FORMAT_YCrCb_420_SP format, the first glClearColor
    // argument clears the Y plane, the second clears the U component
    // of the UV plane, the third clears the V component of the UV
    // plane, and the fourth has no meaning.
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    checkGlError("glClearColor");
    //glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    // Note: only the color buffer may be cleared here;
    // clearing the other buffers raises an error.
    glClear(GL_COLOR_BUFFER_BIT);
    checkGlError("glClear");
    // Dump the contents of dstTexBuffer to the sdcard
    // to check whether glClearColor succeeded.
    char* buf = NULL;
    dstTexBuffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, (void**)(&buf));
    dumpImage((unsigned char*)buf, frameid++, yuvTexWidth, yuvTexHeight, 1.5);
    dstTexBuffer->unlock();
}

The Khronos page says the following about glClearColor:
"When clearing YUV Color Buffers, clear color should be defined in yuv color space and so floating point r, g, and b value will be mapped to corresponding y, u and v value and alpha channel will be ignored. The result of clearing integer color buffers with Clear is undefined."
From this description, the first glClearColor argument clears the Y component, the second clears the U component, the third clears the V component, and the fourth has no meaning. Clearing with glClearColor(1.0f, 0.0f, 0.0f, 1.0f) produces: [image]
Clearing with glClearColor(0.0f, 1.0f, 0.0f, 1.0f) produces: [image] Clearing with glClearColor(0.0f, 0.0f, 1.0f, 1.0f) produces: [image]
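The dumpImage helper used in renderFrame is not shown in the post. A minimal sketch, assuming an NV21 layout (w * h * 1.5 bytes) and a hypothetical output path on the sdcard:

#include <stdio.h>

// Write one raw NV21 frame to disk so it can be inspected with a YUV viewer.
static void dumpImage(unsigned char* data, int id, int w, int h, float bpp) {
    char path[128];
    snprintf(path, sizeof(path), "/sdcard/frame_%04d_%dx%d.yuv", id, w, h);
    FILE* fp = fopen(path, "wb");
    if (!fp) return;
    fwrite(data, 1, (size_t)(w * h * bpp), fp);
    fclose(fp);
}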
2.3 Rendering an object to the framebuffer object
Building on section 2.2, we render a square onto the framebuffer object.
2.3.1 Creating the shaders
The GL_EXT_YUV_target extension must be enabled in the fragment shader, like so:
"#extension GL_EXT_YUV_target : require\n"
When rendering to a YUV texture, the layout (yuv) qualifier must also be added.
The Khronos page describes this as follows:
A shader which produces yuv format color output must qualify the fragment shader output variable with new yuv layout qualifier as described below.
layout (yuv) out vec4 color;
The new yuv layout qualifier can't be combined with any other layout qualifier, can only be used with fragment shader outputs, and is available only when the new GLSL extension is specified. Additionally, if a shader qualifies a fragment shader output with the new yuv qualifier and writes depth or multiple color outputs, compilation fails. The complete shader code follows:
vertex shader:
static const char gVertexShader[] =
    "#version 300 es\n"
    "in vec4 vPosition;\n"
    "void main() {\n"
    "  gl_Position = vec4(vPosition.x*0.5, vPosition.y*0.5, 0.0, 1.0);\n"
    "  //gl_Position = vPosition;\n"
    "}\n";
fragment shader:
static const char gFragmentShader[] =
    "#version 300 es\n"
    "#extension GL_EXT_YUV_target : require\n"
    "precision mediump float;\n"
    "layout (yuv) out vec3 outColor;\n"
    "void main() {\n"
    "  outColor = vec3(1.0, 0.0, 1.0);\n"
    "}\n";

The vertices used:
const GLfloat gTriangleVertices[] = {
    -1.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
};

2.3.2 Drawing the square
The drawing code follows:
void renderFrame() {
    glBindFramebuffer(GL_FRAMEBUFFER, gFbo);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    checkGlError("glClearColor");
    glClear(GL_COLOR_BUFFER_BIT);
    checkGlError("glClear");
    glUseProgram(gProgram);
    checkGlError("glUseProgram");
    glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
    checkGlError("glVertexAttribPointer");
    glEnableVertexAttribArray(gvPositionHandle);
    checkGlError("glEnableVertexAttribArray");
    glUniform1i(gYuvTexSamplerHandle, 0);
    checkGlError("glUniform1i");
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, yuvTex);
    checkGlError("glBindTexture");
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    checkGlError("glDrawArrays");
    //glFinish();
    //printf("glFinish %d ====.\n",frameid);
    char* buf = NULL;
    dstTexBuffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, (void**)(&buf));
    dumpImage((unsigned char*)buf, frameid++, yuvTexWidth, yuvTexHeight, 1.5);
    dstTexBuffer->unlock();
}

The final rendered result is shown below: [image] Green is the clear color and yellow is the rendered square.
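For completeness, gProgram and gvPositionHandle come from the usual GLES program setup, which the post does not show. A minimal sketch along the lines of the standard Android GLES samples (loadShader and setupGraphics are assumed helpers reproduced here, without error logging):

GLuint gProgram;
GLuint gvPositionHandle;

static GLuint loadShader(GLenum type, const char* src) {
    // compile a single shader stage
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;
}

static bool setupGraphics() {
    // link the vertex and fragment shaders defined above
    gProgram = glCreateProgram();
    glAttachShader(gProgram, loadShader(GL_VERTEX_SHADER, gVertexShader));
    glAttachShader(gProgram, loadShader(GL_FRAGMENT_SHADER, gFragmentShader));
    glLinkProgram(gProgram);
    GLint ok = GL_FALSE;
    glGetProgramiv(gProgram, GL_LINK_STATUS, &ok);
    if (!ok) return false;
    gvPositionHandle = glGetAttribLocation(gProgram, "vPosition");
    return true;
}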
A new OpenGL GLSL extension flag is added:
#extension GL_EXT_YUV_target
When the above GLSL extension is specified, one new sampler type will be available for sampling the 2D texture:
__samplerExternal2DY2YEXT
The "__samplerExternal2DY2YEXT" is used to sample a YUV texture image and output color value without any color conversion.
Whenever a YUV sample is output from the sampler, the format of the YUV will be as if YUV 4:4:4 format is output. This also means that the Y sample maps to component R, the U sample maps to component G, the V sample maps to component B, and the component A will be 1.0f.
The RGB sample output will be the same as in OpenGL ES specification.
Here is one example:
uniform __samplerExternal2DY2YEXT u_sTexture;
From the description above, once GL_EXT_YUV_target is enabled there is a new sampler type, __samplerExternal2DY2YEXT, which samples a YUV-format texture without performing any format conversion.
Here I'll just attach a complete fragment shader demo that uses __samplerExternal2DY2YEXT:
static const char gFragmentShader[] =
    "#version 300 es\n"
    "#extension GL_OES_EGL_image_external_essl3 : require\n"
    "#extension GL_EXT_YUV_target : require\n"
    "precision mediump float;\n"
    "//uniform samplerExternalOES yuvTexSampler;\n"
    "uniform __samplerExternal2DY2YEXT yuvTexSampler;\n"
    "in vec2 yuvTexCoords;\n"
    "out vec4 outColor;\n"
    "void main() {\n"
    "  vec3 srcYuv = texture(yuvTexSampler, yuvTexCoords).xyz;\n"
    "  outColor = vec4(yuv_2_rgb(srcYuv, itu_601), 1.0);\n"
    "}\n";

yuvTexSampler samples in YUV format; the built-in function yuv_2_rgb converts YUV to RGB, so the YUV texture can be displayed normally on an ordinary surface. The shader above has the same effect as the one below:
static const char gFragmentShader[] =
    "#version 300 es\n"
    "#extension GL_OES_EGL_image_external_essl3 : require\n"
    "#extension GL_EXT_YUV_target : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES yuvTexSampler;\n"
    "//uniform __samplerExternal2DY2YEXT yuvTexSampler;\n"
    "in vec2 yuvTexCoords;\n"
    "out vec4 outColor;\n"
    "void main() {\n"
    "  outColor = texture(yuvTexSampler, yuvTexCoords);\n"
    "}\n";
New built-in functions
When the new GLSL extension is specified, two new built-in functions become available for RGB-to-YUV and YUV-to-RGB color space conversion.
vec3 rgb_2_yuv(vec3 color, yuvCscStandardEXT conv_standard);
The function rgb_2_yuv applies an RGB-to-YUV color conversion transformation to the "color" value, using the formula selected by the yuvCscStandardEXT variable. The first input parameter specifies the RGB value in the x, y and z channels of a vec3; correspondingly, the return value carries the transformed y, u and v values in its x, y and z channels. The precision of the input color defines the precision used for the color space conversion and for the output YUV color value.
vec3 yuv_2_rgb (vec3 color, yuvCscStandardEXT conv_standard);
The function yuv_2_rgb applies a YUV-to-RGB color conversion transformation to the "color" value, using the formula selected by the yuvCscStandardEXT variable. The first input parameter specifies the YUV value in the x, y and z channels of a vec3; correspondingly, the return value carries the transformed r, g and b values in its x, y and z channels. The precision of the input color defines the precision used for the color space conversion and for the output RGB color value. So enabling GL_EXT_YUV_target adds two built-in functions, rgb_2_yuv and yuv_2_rgb, which convert between YUV and RGB. Building on section 2, let's implement a small rgb_2_yuv demo. In section 2 the fragment shader output was
" outColor = vec3(1.0,0.0,1.0);\n" 最終輸出的黃色的正方形,如果我們期望輸出紅色的矩形,只需要稍微修改下fragement shader即可,程式碼如下:
static const char gFragmentShader[] =
    "#version 300 es\n"
    "#extension GL_EXT_YUV_target : require\n"
    "precision mediump float;\n"
    "layout (yuv) out vec3 outColor;\n"
    "void main() {\n"
    "  vec3 red = vec3(1.0, 0.0, 0.0);\n"
    "  outColor = rgb_2_yuv(red, itu_601);\n"
    "  //outColor = vec3(1.0, 0.0, 1.0);\n"
    "}\n";

The final rendered result is shown below: [image]
Of course, these two built-in functions have another important use: image format conversion. A conversion pass might look like the sketch below.
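This sketch is not from the original post: a fragment shader that samples an ordinary RGBA texture and writes a layout (yuv) output, so that rendering a full-screen quad into a YUV-backed FBO (like the one from section 2.1) converts the image in one pass:

static const char gRgb2YuvShader[] =
    "#version 300 es\n"
    "#extension GL_EXT_YUV_target : require\n"
    "precision mediump float;\n"
    "uniform sampler2D rgbTexSampler;\n"  // the RGBA source image
    "in vec2 texCoords;\n"
    "layout (yuv) out vec3 outColor;\n"
    "void main() {\n"
    "  vec3 rgb = texture(rgbTexSampler, texCoords).rgb;\n"
    "  outColor = rgb_2_yuv(rgb, itu_601);\n"
    "}\n";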
It can convert a YUV texture image to an RGB texture image (which samplerExternalOES can also achieve), and an RGB texture image to a YUV texture image. This concludes the first look at GL_EXT_YUV_target. The next step is to apply this extension in the Android Camera system and post-process camera YUV frame data directly. Stay tuned!
https://github.com/fuyufjh/GraphicBuffer
GraphicBuffer
Use the GraphicBuffer class in Android native code in your project, without compiling against the Android source code.
This repository is for APIs 23-27. API 23 is supported without additional tricks, APIs 24-25 need making your application a system application.
APIs 26 and 27 do not need code from this repository since a more convenient alternative is available: HardwareBuffer.
Moreover, this README provides an example of usage of the buffer to obtain a rendered texture image using simple and fast memcpy() calls, both for GraphicBuffer (API <= 23) and HardwareBuffer (API >= 26).
Inspired by tcuAndroidInternals.cpp
How to use
The usage is exactly the same as android::GraphicBuffer on API <= 25 or HardwareBuffer on API >= 26. The example below shows pseudo-code which renders something to a texture attached to a framebuffer and gets the result using simple memcpy() calls. Examples for both API >= 26 (HardwareBuffer) and API < 26 (GraphicBuffer) are provided. If something doesn't work, it's worth checking that pointers are valid, that eglGetError() shows no issues, that there are no errors from the Android system, and the return codes from glGetError() if drawing issues occur.
An example for API <= 25 using this repository, GraphicBuffer:
// for EGL calls
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Use code from this repository. Note that the define __ANDROID_API__ must be set properly for it to work.
// Also add -lEGL at link stage.

// bind FBO (create FBO my_handle first!)
glBindFramebuffer(GL_FRAMEBUFFER, my_handle);

// attach texture to FBO (create texture my_texture first!)
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, my_texture, 0);

// usage flags for the GraphicBuffer
int usage = GraphicBuffer::USAGE_HW_RENDER | GraphicBuffer::USAGE_SW_READ_OFTEN | GraphicBuffer::USAGE_SW_WRITE_NEVER;

// create GraphicBuffer
GraphicBuffer* graphicBuf = new GraphicBuffer(width, height, PIXEL_FORMAT_RGBA_8888, usage);

// get the native buffer
auto clientBuf = (EGLClientBuffer) graphicBuf->getNativeBuffer();

// obtaining the EGL display
EGLDisplay disp = eglGetDisplay(EGL_DEFAULT_DISPLAY);

// specifying the image attributes
EGLint eglImageAttributes[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};

// creating an EGL image
EGLImageKHR imageEGL = eglCreateImageKHR(disp, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, clientBuf, eglImageAttributes);

// Doing some OpenGL rendering like glDrawArrays
// Shaders also work, need #extension GL_OES_EGL_image_external : require
// Now the result is inside the FBO my_handle

// binding the OUTPUT texture
glBindTexture(GL_TEXTURE_2D, my_texture);

// attaching the EGLImage to the OUTPUT texture
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, imageEGL);

// Obtaining the content image:

// pointers for reading and writing texture data
void *readPtr, *writePtr;

// locking the buffer
graphicBuf->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &readPtr);

// setting the write pointer
writePtr = <set to a valid memory area, like malloc(_YOURSIZE)>;

// obtaining the stride (for me it was always = width)
int stride = graphicBuf->getStride();

// loop over texture rows
for (int row = 0; row < height; row++) {
    // copying; 4 = 4 channels RGBA because of the format above
    memcpy(writePtr, readPtr, width * 4);

    // advancing the read pointer by stride * 4 bytes
    readPtr = (void *)((char *)readPtr + stride * 4);

    // advancing the write pointer by width * 4 bytes
    writePtr = (void *)((char *)writePtr + width * 4);
}

// NOW the data is in writePtr memory

// unlocking the buffer
graphicBuf->unlock();

An example for API >= 26. This repository is NOT needed, because there is an open alternative in the NDK [1]. The example does exactly the same thing as the one above.
// for EGL calls
#include <EGL/egl.h>
#include <EGL/eglext.h>

// for API >= 26
#include <android/hardware_buffer.h>
// Also add -lEGL -lnativewindow -lGLESv3 at link stage

// bind FBO (create FBO my_handle first!)
glBindFramebuffer(GL_FRAMEBUFFER, my_handle);

// attach texture to FBO (create texture my_texture first!)
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, my_texture, 0);

// OUR parameters that we will set and give to AHardwareBuffer
AHardwareBuffer_Desc usage;

// filling in the usage for the HardwareBuffer
usage.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
usage.height = outputHeight;
usage.width = outputWidth;
usage.layers = 1;
usage.rfu0 = 0;
usage.rfu1 = 0;
usage.stride = 10;
usage.usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN | AHARDWAREBUFFER_USAGE_CPU_WRITE_NEVER | AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT;

// create the AHardwareBuffer
AHardwareBuffer* graphicBuf;
AHardwareBuffer_allocate(&usage, &graphicBuf); // it's worth checking the return code

// ACTUAL parameters of the AHardwareBuffer which it reports
AHardwareBuffer_Desc usage1;
// for the stride, see below
AHardwareBuffer_describe(graphicBuf, &usage1);

// get the native buffer
EGLClientBuffer clientBuf = eglGetNativeClientBufferANDROID(graphicBuf);

// obtaining the EGL display
EGLDisplay disp = eglGetDisplay(EGL_DEFAULT_DISPLAY);

// specifying the image attributes
EGLint eglImageAttributes[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};

// creating an EGL image
EGLImageKHR imageEGL = eglCreateImageKHR(disp, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, clientBuf, eglImageAttributes);

// binding the OUTPUT texture
glBindTexture(GL_TEXTURE_2D, my_texture);

// attaching the EGLImage to the OUTPUT texture
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, imageEGL);

// Doing some OpenGL rendering; in ESSL 3.x shaders use
// #extension GL_OES_EGL_image_external_essl3 : require
// Now the result is inside the FBO my_handle

// Obtaining the content image:

// pointers for reading and writing texture data
void *readPtr, *writePtr;

// locking the buffer
AHardwareBuffer_lock(graphicBuf, AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN, -1, nullptr, &readPtr);

// setting the write pointer
writePtr = <set to a valid memory area, like malloc(_YOURSIZE)>;

// obtaining the stride (for me it was always = width)
int stride = usage1.stride;

// loop over texture rows
for (int row = 0; row < height; row++) {
    // copying; 4 = 4 channels RGBA because of the format above
    memcpy(writePtr, readPtr, width * 4);

    // advancing the read pointer by stride * 4 bytes
    readPtr = (void *)((char *)readPtr + stride * 4);

    // advancing the write pointer by width * 4 bytes
    writePtr = (void *)((char *)writePtr + width * 4);
}

// NOW the data is in writePtr memory

// unlocking the buffer
AHardwareBuffer_unlock(graphicBuf, nullptr); // worth checking the return code

How to access private libraries on API 24-25
On API 26, there is a public HardwareBuffer [1] option which replaces the GraphicBuffer hacks. On API <= 23 the hack from this repo worked because access to private libraries such as libui.so was allowed.
It's still allowed [2] on API 24-25; however, libui.so also requires gralloc.exynos5.so (see the full list of its dependencies [3]), which is not allowed on API 24-25. The app is killed when trying to dlopen libui.so (on new GraphicBuffer()).
There are working approaches for API <= 23 and for API >= 26, but on 24 and 25 it seems impossible to use any kind of GraphicBuffer-like access without extra privileges.
The solution for API 24-25, along with using code from this repository, is to make your application a system application. It requires root privileges. The process is described in https://stackoverflow.com/questions/24641604/qt-application-as-system-app-on-android for Qt-based apps.
How to tweak the API
The API 23 version in https://github.com/fuyufjh/GraphicBuffer/blob/fa346e1f6266a717758d32aee9c75c85da8a7263/GraphicBuffer.cpp uses the _ZN7android13GraphicBufferC1Ejjij constructor symbol, which was replaced by _ZN7android13GraphicBufferC1EjjijNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEE in API 24-25 (a std::string argument was added at the end). This repository works for API 24-25 as well.
Since I'm not sure if any other APIs have different constructors, below you can find directions on how to tweak the code for your API.
Copy your file /system/lib/libui.so from your Android device to your PC. This is the file that contains symbol names for GraphicBuffer. Using Android NDK's nm for your architecture, run:
$ /somewhere/android-ndk/find-it/arm-linux-androideabi-gcc-nm -C -D libui.so | grep GraphicBuffer | sort

It will produce output listing the demangled GraphicBuffer symbols; pick the constructor that matches your API.
https://cloud.tencent.com/developer/article/1739511
glReadPixels
glReadPixels is an OpenGL ES API supported by both OpenGL ES 2.0 and 3.0. It is very convenient to use; the single call below does the job, but it is inefficient.

glReadPixels(0, 0, outImage.width, outImage.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

When glReadPixels is called it costs CPU cycles, and the GPU must wait for the current frame to finish drawing; only after the pixel read completes does computation of the next frame begin, which stalls the rendering pipeline.
Note that glReadPixels reads the color buffer of the currently bound FBO, so when using multiple FBOs (framebuffer objects) you must make sure to read the color buffer of the intended FBO.
The glReadPixels bottleneck usually appears when reading large-resolution images, so a common optimization is to convert the processed RGBA to YUV (usually YUYV format) in the shader and then read the YUV image out as if it were RGBA. This halves the amount of data transferred and improves performance noticeably, as sketched below.
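A sketch of that idea, not taken from the article: the readback FBO is allocated at half the source width, and each output RGBA texel packs two source pixels as Y0/U/Y1/V (BT.601 full-range coefficients; u_srcTexelW is an assumed uniform equal to 1.0 / source width):

static const char gRgba2YuyvShader[] =
    "#version 300 es\n"
    "precision highp float;\n"
    "uniform sampler2D s_texture;\n"   // the processed RGBA image
    "uniform float u_srcTexelW;\n"     // 1.0 / source width
    "in vec2 v_texCoord;\n"
    "out vec4 outColor;\n"
    "void main() {\n"
    // each half-width output texel covers two adjacent source pixels
    "  vec3 c0 = texture(s_texture, v_texCoord - vec2(u_srcTexelW * 0.5, 0.0)).rgb;\n"
    "  vec3 c1 = texture(s_texture, v_texCoord + vec2(u_srcTexelW * 0.5, 0.0)).rgb;\n"
    "  float y0 = dot(c0, vec3(0.299, 0.587, 0.114));\n"
    "  float y1 = dot(c1, vec3(0.299, 0.587, 0.114));\n"
    "  float u  = dot(c0, vec3(-0.169, -0.331, 0.500)) + 0.5;\n"
    "  float v  = dot(c0, vec3(0.500, -0.419, -0.081)) + 0.5;\n"
    "  outColor = vec4(y0, u, y1, v);\n"  // packed as YUYV
    "}\n";

The YUYV image is then read back with glReadPixels(0, 0, width / 2, height, GL_RGBA, GL_UNSIGNED_BYTE, buf), halving the transfer size.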
PBO
PBO (Pixel Buffer Object) is an OpenGL ES 3.0 concept: a pixel buffer object, used mainly for asynchronous pixel transfer operations. A PBO is used only for pixel transfers; it is not attached to a texture and has nothing to do with FBOs (framebuffer objects).
A PBO is similar to a VBO (vertex buffer object): it also allocates GPU memory, but what it stores is image data.
PBOs can move pixel data quickly between GPU buffers without costing CPU cycles, and in addition they support asynchronous transfer.
A PBO is a "trade space for time" strategy: with a single PBO, performance barely improves, so usually several PBOs are used alternately.
[Figure: reading pixels back with 2 PBOs]
As shown in the figure above, two PBOs are used to read image data back from the framebuffer: glReadPixels tells the GPU to transfer the image from the framebuffer into PBO 1, while at the same time the CPU processes the image data already in PBO 2, as sketched below.
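A minimal double-PBO sketch, with illustrative names and no error handling:

// One-time setup: two pack PBOs, each large enough for one RGBA frame.
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Per frame: start an asynchronous read into pbo[index], then map
// pbo[nextIndex], which holds the pixels read back on the previous frame.
static int index = 0;
int nextIndex = (index + 1) % 2;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
// with a pack PBO bound, the last argument is a byte offset and the call returns immediately
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[nextIndex]);
void* data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
if (data) {
    // process the previous frame's image here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
index = nextIndex;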
For detailed PBO usage, see the article OpenGL ES 3.0 开发连载(22):PBO; it won't be repeated here.
ImageReader
ImageReader is a Java-layer object provided by the Android SDK; internally it creates a Surface object.
It is commonly used for Android Camera2 preview: addTarget adds its Surface object as an output target for the camera preview images, which are then obtained through a callback interface.
mImageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
mSurface = mImageReader.getSurface();

private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            // process the camera preview image here
            image.close();
        }
    }
};

So how do we combine ImageReader with OpenGL ES?
We know that when creating an OpenGL context with EGL, eglCreateWindowSurface takes an ANativeWindow object, and an ANativeWindow can be created from a Surface object.
So we can use the ImageReader's Surface as the window surface into which OpenGL renders; each rendered frame is then obtained through the ImageReader's callback, as sketched below.
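A minimal native-side sketch, assuming jsurface is the jobject returned by mImageReader.getSurface() (passed down through JNI) and that display, config, and context were created the usual way:

#include <android/native_window_jni.h>
#include <EGL/egl.h>

// Wrap the ImageReader's Surface in an ANativeWindow and use it as the EGL window surface.
ANativeWindow* window = ANativeWindow_fromSurface(env, jsurface);
EGLSurface eglSurface = eglCreateWindowSurface(display, config, window, nullptr);
eglMakeCurrent(display, eglSurface, eglSurface, context);

// ... render with OpenGL ES ...

// each swap queues a frame to the ImageReader, which then
// fires onImageAvailable on the Java side
eglSwapBuffers(display, eglSurface);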
HardwareBuffer
HardwareBuffer is a lower-level object representing a buffer accessible to various hardware units.
In particular, a HardwareBuffer can be mapped into the memory of various hardware subsystems, such as the GPU, sensors, a context hub, or other auxiliary processing units.
HardwareBuffer is the interface provided in Android 8 (API >= 26) to replace GraphicBuffer; on API <= 25, GraphicBuffer can be used instead.
The two are used in essentially the same steps, and both can quickly read image data out of GPU memory (textures), but HardwareBuffer can also access the memory of other hardware, so it is more broadly applicable.
Android provides HardwareBuffer interfaces in both the native layer and the Java layer; the native one is called AHardwareBuffer.
Reading GPU (texture) image data with AHardwareBuffer requires the GL and EGL extension headers (GLEXT and EGLEXT).
The main steps: first create an AHardwareBuffer and an EGLImageKHR object, then bind the target texture (the FBO's color attachment) to the EGLImageKHR object; once rendering finishes, the texture image can be read, as sketched below.
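A compact sketch of those steps; identifiers such as m_HwBuffer, m_Fbo, and m_FboTexture are illustrative, and the fuller README example above shows the same flow:

// Wrap the AHardwareBuffer in an EGLImage and attach it to the FBO's color texture.
EGLClientBuffer clientBuf = eglGetNativeClientBufferANDROID(m_HwBuffer);
EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
EGLImageKHR image = eglCreateImageKHR(eglGetDisplay(EGL_DEFAULT_DISPLAY), EGL_NO_CONTEXT,
                                      EGL_NATIVE_BUFFER_ANDROID, clientBuf, attrs);
glBindTexture(GL_TEXTURE_2D, m_FboTexture);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
glBindFramebuffer(GL_FRAMEBUFFER, m_Fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_FboTexture, 0);
// render, wait for completion (e.g. a fence or glFinish), then lock the buffer as shown below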
Reading texture image data with HardwareBuffer:

unsigned char *ptrReader = nullptr;
AHardwareBuffer_lock(m_HwBuffer, AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN, -1, nullptr, (void **)&ptrReader);
// the YUV (NV21) image can be read out directly
memcpy(dstBuffer, ptrReader, imgWidth * imgHeight * 3 / 2);
int32_t fence = -1;
AHardwareBuffer_unlock(m_HwBuffer, &fence);

In addition, HardwareBuffer supports reading YUV (YUV420) images directly from the texture; you only need to implement the RGB-to-YUV conversion in the shader.
The GLES 3.0 YUV extension directly supports RGB-to-YUV conversion:

precision mediump float;
in vec2 v_texCoord;
layout(yuv) out vec4 outColor;
uniform sampler2D s_texture;
void main() {
    // color space conversion standard
    yuvCscStandardEXT conv_standard = itu_601_full_range;
    vec4 rgbaColor = texture(s_texture, v_texCoord);
    vec3 rgbColor = rgbaColor.rgb;
    // convert RGB to YUV
    vec3 yuv = rgb_2_yuv(rgbColor, conv_standard);
    outColor = vec4(yuv, 1.0);
}

For concrete HardwareBuffer and GraphicBuffer usage, see: https://github.com/fuyufjh/GraphicBuffer
Measured performance comparison
Comparing reads of the same-format image at roughly 3K resolution on an SDM8150 phone, ImageReader, PBO, and HardwareBuffer are clearly faster than glReadPixels.
HardwareBuffer, ImageReader, and PBO perform about the same, although in theory HardwareBuffer should be the fastest.
Of the four approaches, glReadPixels is the easiest to use and HardwareBuffer the most complex; implementation complexity: HardwareBuffer > PBO > ImageReader > glReadPixels.
Weighing measured performance against implementation effort: in the native layer, PBO is recommended; for very large resolutions, try HardwareBuffer; in the Java layer, use ImageReader.
https://tw511.com/a/01/14066.html