Where did your Method 4 code come from?
What is that color space?
Could this be the rounding issue described in:
https://code.google.com/p/libyuv/issues/detail?id=324
The conversion is intended to be BT.601 in the constrained video range
(16 to 235 for Y).
The formula you show doesn't have Y - 16, so I'm thinking it's not the same color
space? The other two are BT.709 and BT.2020, intended for LCDs and HDTVs, plus
full-range variants such as JPEG, which libyuv has some support for.
Could it be using the JPEG color space? There's an open issue to implement that:
https://code.google.com/p/libyuv/issues/detail?id=241
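For comparison, here is a minimal side-by-side sketch of the two formulas
(plain C; the clamp255 helper and function names are made up for illustration
and are not libyuv code):

// Clamp helper assumed for illustration only (libyuv has its own Clamp).
static unsigned char clamp255(int v) {
  return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

// BT.601 constrained ("video") range: Y is offset by 16 and scaled by 1.164.
static void Bt601LimitedToRgb(int y, int u, int v,
                              unsigned char* r, unsigned char* g, unsigned char* b) {
  double y1 = (y - 16) * 1.164;
  *r = clamp255((int)(y1 + 1.596 * (v - 128)));
  *g = clamp255((int)(y1 - 0.813 * (v - 128) - 0.391 * (u - 128)));
  *b = clamp255((int)(y1 + 2.018 * (u - 128)));
}

// JPEG ("full") range: no Y offset and no 1.164 scale.
static void JpegFullToRgb(int y, int u, int v,
                          unsigned char* r, unsigned char* g, unsigned char* b) {
  *r = clamp255((int)(y + 1.402 * (v - 128)));
  *g = clamp255((int)(y - 0.714 * (v - 128) - 0.344 * (u - 128)));
  *b = clamp255((int)(y + 1.772 * (u - 128)));
}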
To experiment with changing the code, disable NEON and tinker with
source/row_common.cc:
// C reference code that mimics the YUV assembly.
#define YG 74 /* (int8)(1.164 * 64 + 0.5) */
#define UB 127 /* min(63,(int8)(2.018 * 64)) */
#define UG -25 /* (int8)(-0.391 * 64 - 0.5) */
#define UR 0
#define VB 0
#define VG -52 /* (int8)(-0.813 * 64 - 0.5) */
#define VR 102 /* (int8)(1.596 * 64 + 0.5) */
// Bias
#define BB UB * 128 + VB * 128
#define BG UG * 128 + VG * 128
#define BR UR * 128 + VR * 128
static __inline void YuvPixel(uint8 y, uint8 u, uint8 v,
                              uint8* b, uint8* g, uint8* r) {
  int32 y1 = ((int32)(y) - 16) * YG;
  *b = Clamp((int32)((u * UB + v * VB) - (BB) + y1) >> 6);
  *g = Clamp((int32)((u * UG + v * VG) - (BG) + y1) >> 6);
  *r = Clamp((int32)((u * UR + v * VR) - (BR) + y1) >> 6);
}
Original comment by fbarch...@google.com
on 11 Dec 2014 at 8:01
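For reference, a quick sketch of where the fixed-point constants in YuvPixel
come from (derived by hand from the comments in the defines above; treat it as
illustrative):

// Each coefficient is scaled by 64 so the math stays in integers,
// and the final >> 6 divides the 64 back out:
//   YG = round(1.164 * 64)  = 74
//   VR = round(1.596 * 64)  = 102
//   UG = round(-0.391 * 64) = -25
//   VG = round(-0.813 * 64) = -52
//   UB = 2.018 * 64 = 129, which overflows int8, so it is saturated to 127
// The bias defines (BB, BG, BR) fold the implicit "- 128" on U and V into
// a single subtraction per channel.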
Did some tests:
1. Disabled NEON by commenting out the following lines in Android.mk:
# ifeq ($(TARGET_ARCH_ABI),armeabi-v7a)
# LOCAL_CFLAGS += -DLIBYUV_NEON
# LOCAL_SRC_FILES += \
# source/compare_neon.cc.neon \
# source/rotate_neon.cc.neon \
# source/row_neon.cc.neon \
# source/scale_neon.cc.neon
# endif
2. Modified the YuvPixel function in row_common.cc:
static __inline void YuvPixel(uint8 y, uint8 u, uint8 v,
                              uint8* b, uint8* g, uint8* r) {
  //------------------------------------------------------------
  //<<< original code, the color is *NOT* right
  //------------------------------------------------------------
  // int32 y1 = ((int32)(y) - 16) * YG;
  // *b = Clamp((int32)((u * UB + v * VB) - (BB) + y1) >> 6);
  // *g = Clamp((int32)((u * UG + v * VG) - (BG) + y1) >> 6);
  // *r = Clamp((int32)((u * UR + v * VR) - (BR) + y1) >> 6);

  //------------------------------------------------------------
  //<<< formula the original code is based on, the color is *NOT* right
  //------------------------------------------------------------
  int32 y1 = ((int32)((y) - 16) * 1.164);
  *b = Clamp((int32)(y1 + 2.018*(u - 128)));
  *g = Clamp((int32)(y1 - 0.813*(v - 128) - 0.391*(u - 128)));
  *r = Clamp((int32)(y1 + 1.596*(v - 128)));

  //------------------------------------------------------------
  //<<< JPEG conversion formula, the color is right
  //------------------------------------------------------------
  // *b = Clamp((int32)(y + 1.40200*(u - 128)));
  // *g = Clamp((int32)(y - 0.34414*(v - 128)) - 0.71414*(u - 128));
  // *r = Clamp((int32)(y + 1.77200*(v - 128)));
}
My app runs on a custom Android device that captures a motion-JPEG stream
from a web camera and converts it to YUV frames. My app processes the YUV
frames in the onPreviewFrame callback and converts them to an RGB bitmap.
So my app needs to use the JPEG conversion formula in this scenario, am I right?
Is there any way to use NEON with the JPEG conversion formula?
-----
Another weird thing: I used another function, which came from the GPUImage project:
https://github.com/CyberAgent/android-gpuimage
I tried different formulas in this function:
Method 1 is the original; the color is too blue, not right.
Method 2 is the JPEG conversion formula, but the color is also too blue, not right.
Method 3 is BT.601; the color is right.
The test result is the opposite of what I saw with libyuv above.
void YUVtoRBGA(jbyte* yuv, jint width, jint height, jint* rgbData)
{
    int sz;
    int i;
    int j;
    int Y;
    int Y1;
    int Cr = 0;
    int Cb = 0;
    int pixPtr = 0;
    int jDiv2 = 0;
    int R = 0;
    int G = 0;
    int B = 0;
    int cOff;
    int w = width;
    int h = height;
    sz = w * h;
    for(j = 0; j < h; j++) {
        pixPtr = j * w;
        jDiv2 = j >> 1;
        for(i = 0; i < w; i++) {
            Y = yuv[pixPtr];
            if(Y < 0) Y += 255;
            if((i & 0x1) != 1) {
                cOff = sz + jDiv2 * w + (i >> 1) * 2;
                Cb = yuv[cOff];
                if(Cb < 0) Cb += 127; else Cb -= 128;
                Cr = yuv[cOff + 1];
                if(Cr < 0) Cr += 127; else Cr -= 128;
            }

            //Method 1 orig
            //===== Approximation =============
            // R = Y + 1.40625Cr
            // G = Y - 0.15625Cb - 0.28125Cr
            // B = Y + 1.765625Cb
            //=================================
            // R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            // if(R < 0) R = 0; else if(R > 255) R = 255;
            // G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
            // if(G < 0) G = 0; else if(G > 255) G = 255;
            // B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
            // if(B < 0) B = 0; else if(B > 255) B = 255;

            //Method 2
            //==============================
            //R = Y + 1.402Cr
            //G = Y - 0.34414Cb - 0.71414Cr
            //B = Y + 1.772Cb
            //==============================
            //===== Approximation ==========
            // R = Y + 1.40625Cr
            // G = Y - 0.34375Cb - 0.71875Cr
            // B = Y + 1.765625Cb
            //==============================
            // R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
            // if(R < 0) R = 0; else if(R > 255) R = 255;
            // G = Y - (Cb >> 1) + (Cb >> 3) + (Cb >> 5) - Cr + (Cr >> 2) + (Cr >> 5);
            // if(G < 0) G = 0; else if(G > 255) G = 255;
            // B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
            // if(B < 0) B = 0; else if(B > 255) B = 255;

            //Method 3
            //==============================
            //R = 1.164Y + 2.018Cr;
            //G = 1.164Y - 0.813Cb - 0.391Cr;
            //B = 1.164Y + 1.596Cb;
            //==============================
            //===== Approximation ==========
            // R = 1.1640625Y + 2.015625Cr
            // G = 1.1640625Y - 0.8125Cb - 0.375Cr
            // B = 1.1640625Y + 1.59375Cb
            //==============================
            // Y = Y + (Y >> 3) + (Y >> 5) + (Y >> 7);
            // R = Y + (Cr << 1) + (Cr >> 6);
            // if(R < 0) R = 0; else if(R > 255) R = 255;
            // G = Y - Cb + (Cb >> 3) + (Cb >> 4) - (Cr >> 1) + (Cr >> 3);
            // if(G < 0) G = 0; else if(G > 255) G = 255;
            // B = Y + Cb + (Cb >> 1) + (Cb >> 4) + (Cb >> 5);
            // if(B < 0) B = 0; else if(B > 255) B = 255;

            rgbData[pixPtr++] = 0xff000000 + (R << 16) + (G << 8) + B;
        }
    }
}
Original comment by Hoo.Zh...@gmail.com
on 18 Dec 2014 at 7:33
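One thing worth double-checking in the GPUImage loop above: Android camera
preview data is normally NV21, where the interleaved chroma plane stores V
before U, so the two fetches at cOff may be swapped. A minimal sketch of the
NV21 fetch order (an assumption about the buffer layout, echoed by the replies
below):

// For NV21 input the chroma plane is V,U interleaved, so V comes first:
Cr = yuv[cOff];            // V sample
if(Cr < 0) Cr += 127; else Cr -= 128;
Cb = yuv[cOff + 1];        // U sample
if(Cb < 0) Cb += 127; else Cb -= 128;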
Re: Is there any way to use NEON with the JPEG conversion formula?
A first version of J420ToARGB is working with C code.
If the formula is right, I'll update the AVX2 and NEON versions.
Original comment by fbarch...@google.com
on 7 Jan 2015 at 3:39
This is the reference code used by libyuv:
void YUVToRGBReference(int y, int u, int v, int* r, int* g, int* b) {
  *r = RoundToByte((y - 16) * 1.164 + (v - 128) * 1.596);
  *g = RoundToByte((y - 16) * 1.164 + (u - 128) * -0.391 + (v - 128) * -0.813);
  *b = RoundToByte((y - 16) * 1.164 + (u - 128) * 2.018);
}
Microsoft documents it here:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd206750%28v=vs.85%29.aspx
But Android uses a different YUV color space:
http://en.wikipedia.org/wiki/YUV#Numerical_approximations
This is the C++ code used on Android to convert pixels of a YUVImage:
void YUVImage::yuv2rgb(uint8_t yValue, uint8_t uValue, uint8_t vValue,
                       uint8_t *r, uint8_t *g, uint8_t *b) const {
  *r = yValue + (1.370705 * (vValue-128));
  *g = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
  *b = yValue + (1.732446 * (uValue-128));
  *r = clamp(*r, 0, 255);
  *g = clamp(*g, 0, 255);
  *b = clamp(*b, 0, 255);
}
Original comment by fbarch...@google.com
on 7 Feb 2015 at 12:38
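To get a feel for how different the two formulas are, a rough hand-worked
sample (values rounded; not taken from an actual test run):

// For a single sample (Y, U, V) = (100, 90, 200):
//   libyuv BT.601 reference above:  R ~ 213, G ~ 54, B ~ 21
//   Android yuv2rgb above:          R ~ 198, G ~ 62, B ~ 34
// The two color spaces disagree by far more than rounding error.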
The JPEG color space is quite close:
static void YUVJToRGBReference(int y, int u, int v, int* r, int* g, int* b) {
  *r = RoundToByte(y - (v - 128) * -1.40200);
  *g = RoundToByte(y - (u - 128) * 0.34414 - (v - 128) * 0.71414);
  *b = RoundToByte(y - (u - 128) * -1.77200);
}
Original comment by fbarch...@chromium.org
on 17 Mar 2015 at 11:28
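A minimal usage sketch for the full-range path, assuming J420ToARGB takes the
same arguments as I420ToARGB (check libyuv/convert_argb.h in your checkout;
the wrapper name here is made up):

#include "libyuv/convert_argb.h"

// Convert one full-range (JPEG) I420 frame to ARGB.
void ConvertJpegI420ToArgb(const uint8_t* y, const uint8_t* u, const uint8_t* v,
                           uint8_t* argb, int width, int height) {
  int half_width = (width + 1) / 2;       // U and V planes are half resolution
  libyuv::J420ToARGB(y, width,            // Y plane and stride
                     u, half_width,       // U plane and stride
                     v, half_width,       // V plane and stride
                     argb, width * 4,     // ARGB output, 4 bytes per pixel
                     width, height);
}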
Any suggestions on how to proceed on this?
Android is using a different color space for NV12.
Should libyuv support that color space, and if so, how would it be exposed
compared to the standard BT.601 and JPEG color spaces?
Original comment by fbarch...@chromium.org
on 22 Jul 2015 at 10:40
At a lower level, I've refactored the YUV to RGB conversions to take a matrix.
There are now I420 for BT.601, J420 for JPEG, and H420 for BT.709 color spaces.
The NV12 math above doesn't seem to match any commonly known color space, but
it would be possible to express it as a matrix and hook it up to an efficient
conversion if there is demand.
Original comment by fbarch...@google.com
on 17 Sep 2015 at 8:13
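A sketch of how the matrix-parameterized path looks in newer checkouts; the
identifiers below (I420ToARGBMatrix, YuvConstants, kYuvI601Constants,
kYuvJPEGConstants) are taken from later libyuv headers and may not exist under
these exact names at r1505:

#include "libyuv/convert_argb.h"
#include "libyuv/row.h"  // assumed location of the YuvConstants declarations

// Pick a color-space matrix at runtime and convert I420 to ARGB with it.
int ConvertWithMatrix(const uint8_t* y, int y_stride,
                      const uint8_t* u, int u_stride,
                      const uint8_t* v, int v_stride,
                      uint8_t* argb, int argb_stride,
                      int width, int height, bool full_range) {
  const libyuv::YuvConstants* matrix =
      full_range ? &libyuv::kYuvJPEGConstants   // JPEG / full range
                 : &libyuv::kYuvI601Constants;  // BT.601 / constrained range
  return libyuv::I420ToARGBMatrix(y, y_stride, u, u_stride, v, v_stride,
                                  argb, argb_stride, matrix, width, height);
}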
Reviewing the code above, it looks like you had NV21, which is YVU (V before U),
but the code you used fetched YUV. Swapping U and V essentially swaps B and R
in the destination RGB, although the coefficients are then wrong.
Following the GitHub issue
https://github.com/CyberAgent/android-gpuimage/issues/140
it looks like you arrived at standard BT.601,
which is what NV21ToARGB uses, except you may be using full range?
libyuv uses the constrained video range:
// BT.601 YUV to RGB reference
// R = (Y - 16) * 1.164 - V * -1.596
// G = (Y - 16) * 1.164 - U * 0.391 - V * 0.813
// B = (Y - 16) * 1.164 - U * -2.018
While your 'method 4' uses
//===== Approximation ==========
// R = 1.1640625Y + 2.015625Cr
// G = 1.1640625Y - 0.8125Cb - 0.375Cr
// B = 1.1640625Y + 1.59375Cb
On recent versions of libyuv, I'd been using the NV12 code with a different
matrix that accounted for the swapped U and V of NV21. It turns out this code
works on Intel but not on ARM.
In the current version (r1505) I've stopped using that matrix trick and
implemented an NV21ToARGB for ARM as well as Intel.
This allows the low-level functions to accept the color conversion matrix as a
parameter. The YUV constants currently supported are for standard BT.601,
JPEG, and BT.709.
If that was the issue you ran into, then the current version should fix your
issue.
The documentation I found arrived at a different matrix entirely:
*r = yValue + (1.370705 * (vValue-128));
*g = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
*b = yValue + (1.732446 * (uValue-128));
Summary:
The original issue was that U and V were swapped.
The libyuv version of NV21 also had a bug in some versions of the code.
Those two issues should account for the problem, which should now be fixed;
will mark as fixed.
As a follow-up, implementing the Android-specific color space is now easy,
since the low-level functions accept a matrix.
Original comment by fbarch...@chromium.org
on 8 Oct 2015 at 5:34
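For completeness, a minimal sketch of converting an NV21 preview frame with the
NV21ToARGB path mentioned above (stride choices assumed; check
libyuv/convert_argb.h for the exact prototype in your checkout):

#include "libyuv/convert_argb.h"

// Convert one NV21 camera preview frame (Y plane followed by interleaved
// V,U pairs) straight to ARGB, so U and V never need hand-swapping.
void Nv21PreviewToArgb(const uint8_t* nv21, uint8_t* argb,
                       int width, int height) {
  const uint8_t* y_plane  = nv21;                   // full-resolution Y
  const uint8_t* vu_plane = nv21 + width * height;  // half-res interleaved V,U
  libyuv::NV21ToARGB(y_plane, width,                // Y stride = width
                     vu_plane, width,               // VU stride = width bytes
                     argb, width * 4,               // 4 bytes per ARGB pixel
                     width, height);
}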
Original issue reported on code.google.com by
Hoo.Zh...@gmail.com
on 9 Dec 2014 at 8:53