BradLarson / GPUImage

An open source iOS framework for GPU-based image and video processing
http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
BSD 3-Clause "New" or "Revised" License

Why capture as yuv when you are converting to rgb anyway? #2499

Closed: kthacker-hike closed this issue 7 years ago

kthacker-hike commented 7 years ago

Why are you guys recording as YUV and then converting it to RGB?

I am referring to this code in GPUImageVideoCamera.m:

[videoOutput setVideoSettings:
    [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

and then do:

if (isFullYUVRange)
{
    [self changeFragmentShader:kGPUImageYUVFullRangeConversionForLAFragmentShaderString];
}

which is basically a fragment shader that converts YUV to RGB.
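For context, the per-pixel math that such a shader performs is the standard full-range BT.601 YCbCr-to-RGB conversion. Here is a minimal C sketch of that conversion (the coefficients below are the textbook full-range values; the exact constants baked into GPUImage's shader string may be rounded slightly differently):

```c
#include <math.h>

// Full-range BT.601 YCbCr -> RGB, the same conversion a YUV fragment
// shader applies per pixel. Inputs are in [0, 255]; Cb/Cr are centered
// at 128. Coefficients are the standard full-range BT.601 values.
static void yuv_to_rgb(double y, double cb, double cr,
                       double *r, double *g, double *b)
{
    *r = y + 1.402    * (cr - 128.0);
    *g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0);
    *b = y + 1.772    * (cb - 128.0);
}
```

A neutral sample (Cb = Cr = 128) produces a gray pixel with R = G = B = Y, which is a quick sanity check that the chroma terms cancel.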

BradLarson commented 7 years ago

This has noticeable performance advantages. If you ask for direct RGB data from AV Foundation, it has to perform this conversion internally, which I've found to be slower and more CPU-intensive than doing so in a shader. This might be less noticeable in current iOS versions and devices, but with older devices and iOS versions it provided a worthwhile speedup.

There are also memory advantages, in that YUV planes are significantly smaller than RGB data, and I can reuse the texture used as output from the RGB conversion later in the pipeline.
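To put numbers on the memory point: a 4:2:0 biplanar format like kCVPixelFormatType_420YpCbCr8BiPlanarFullRange stores a full-resolution Y plane plus a half-resolution interleaved CbCr plane, i.e. 1.5 bytes per pixel, versus 4 bytes per pixel for RGBA. A quick sketch of the arithmetic:

```c
#include <stddef.h>

// Bytes per frame for 4:2:0 biplanar YUV (NV12-style): a full-resolution
// Y plane plus one interleaved CbCr sample pair per 2x2 pixel block.
static size_t nv12_bytes(size_t w, size_t h)
{
    return w * h                     // Y plane: 1 byte per pixel
         + (w / 2) * (h / 2) * 2;    // CbCr plane: 2 bytes per 2x2 block
}

// Bytes per frame for RGBA: 4 bytes per pixel.
static size_t rgba_bytes(size_t w, size_t h)
{
    return w * h * 4;
}
```

For a 1920x1080 frame that works out to about 3.1 MB of YUV data versus about 8.3 MB of RGBA, so the camera delivers well under half the bytes per frame.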