**Closed** — kthacker-hike closed this issue 7 years ago
This has noticeable performance advantages. If you ask for direct RGB data from AV Foundation, it has to perform this conversion internally, which I've found to be slower and more CPU-intensive than doing so in a shader. This might be less noticeable in current iOS versions and devices, but with older devices and iOS versions it provided a worthwhile speedup.
There are also memory advantages: the YUV planes are significantly smaller than equivalent RGB data, and the texture produced by the RGB conversion can be reused later in the pipeline.
Why are you guys recording as YUV and then converting it to RGB?
I am referring to the code in `GPUImageVideoCamera.m`, where you do:

```objc
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                          forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
```
and then:

```objc
if (isFullYUVRange)
{
    [self changeFramentShader:kGPUImageYUVFullRangeConversionForLAFragmentShaderString];
}
```

which is basically a fragment shader that converts YUV to RGB.