BradLarson / GPUImage

An open source iOS framework for GPU-based image and video processing
http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
BSD 3-Clause "New" or "Revised" License

AverageColorFilter: Total Number of Pixel #680

Closed: artoq closed this issue 11 years ago

artoq commented 11 years ago

Hi there,

In the AverageColorFilter, the (CGFloat)totalNumberOfPixels is used in your calculation. As far as I understand, it is derived from the rounded-down width and height of the reduced frame, is that correct? So with an AVCaptureSessionPreset of 640x480, that makes totalNumberOfPixels = 70.0.

I'm just diving into shader programming, so here's my question: is the 70.0 the (expected) number of fragments per frame? Or, to be more precise: with these settings, is an incoming frame "divided" into 70 fragments, and is the output of the filter the average color of those 70 color-averaged fragments?

Assuming I understood the above correctly: is there a way to manipulate the number of fragments (let's call it the resolution), e.g. to set the fragment size?

Thanks for helping

artoq commented 11 years ago

OK, I'm still not really sure how the filter works in detail... are you using bilinear filtering and mipmapping?

BradLarson commented 11 years ago

This uses a multistage reduction of the image, relying on hardware interpolation to reduce the image sixteenfold at each step (4x in each direction). At the final stage, it just reads the raw pixels of the image at that step and averages the colors of that terminal image.
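For a 640x480 input, that cascade is what produces the totalNumberOfPixels = 70.0 from the first post. A minimal sketch of the arithmetic (the truncating division and the exact stopping size are assumptions here, not taken from the framework's source):

```c
#include <stdio.h>

/* Sketch, not GPUImage's actual code: each stage shrinks the image 4x in
   each direction (16x fewer pixels). Integer division truncates, which is
   where the rounding of the width and height comes from. */
int main(void) {
    int width = 640, height = 480;
    while (width > 16 || height > 16) {  /* assumed stopping size */
        width /= 4;
        height /= 4;
        printf("stage: %d x %d\n", width, height);
    }
    /* Final stage: 10 x 7 = 70 pixels are read back and averaged. */
    printf("totalNumberOfPixels = %d\n", width * height);
    return 0;
}
```

This prints 160 x 120, 40 x 30, and 10 x 7, ending at width * height = 70.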

artoq commented 11 years ago

Thanks a lot! Do you know whether it's a simple linear interpolation that the hardware uses during downsampling?

Just to give you some context for why I'm so interested in your math: I'm trying to build a filter chain colorFilter --> colorAverageFilter, using the alpha value as a flag for whether a pixel contributes to the average calculation or not. So a pixel is set to (0,0,0,0) if it is outside the threshold and (r,g,b,1) if it is inside, and then passed to the average filter, where the alpha total would give me the number of used pixels. Obviously that didn't work out, since the 0-or-1 alpha values get interpolated and averaged quite a lot...

BradLarson commented 11 years ago

It is a linear interpolation between neighboring pixels on each reduction. I tested this against a pixel-by-pixel averaging of several images and found it to be accurate within the rounding precision of 8-bit color components.

This averaging and weighting of the threshold values is just what I do in the ColorObjectTracking example, and that seems to work fine right now.
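Put differently: since every reduction step is a plain linear average, the final alpha value is the arithmetic mean of all the 0/1 flags, so the count of matching pixels can be recovered from it (within the 8-bit rounding at each intermediate stage). A minimal sketch with hypothetical values, not the ColorObjectTracking code itself:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical value read back from the average-color filter. */
    float averageAlpha = 0.25f;
    float totalPixels = 640.0f * 480.0f;

    /* The mean of the 0/1 alpha flags times the pixel count gives the
       number of pixels that passed the threshold. */
    printf("approx. matching pixels: %.0f\n", averageAlpha * totalPixels);
    return 0;
}
```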

artoq commented 11 years ago

Just one last thing, to be sure I got it:

"The reducing process is done by the hardware", that means, by using the fragment shader? At the final stage the fragment shader isn't called, but each pixel's color is added up on the CPU

It might be wrong to say the fragment shader "is called", but I used that expression to make sure the reduction is done by the shader and not by some, well, let's say automatic interpolation function hidden somewhere. Is that correct?

Thank you very much for your answers

BradLarson commented 11 years ago

At each stage, the image is reduced by a multiple in X and Y. A fragment shader is run for each pixel in the output image, which is smaller in dimension than the output from the previous stage. That fragment shader reads from the larger image of the previous pass, but does so by reading from between texels in that image to take advantage of hardware interpolation. The interpolated pixels it reads are then averaged together to produce one average output color.
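A rough C simulation of that sampling trick (the real implementation is a GLSL fragment shader; fetch2x2 here is a hypothetical stand-in for one hardware-interpolated texture read positioned between four texels):

```c
#include <stdio.h>

#define W 8
#define H 8

/* Stand-in for one bilinear texture fetch centered between a 2x2 group of
   texels: the hardware returns their average "for free". */
static float fetch2x2(float img[H][W], int x, int y) {
    return (img[y][x] + img[y][x + 1] +
            img[y + 1][x] + img[y + 1][x + 1]) / 4.0f;
}

int main(void) {
    float input[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            input[y][x] = (float)(y * W + x);  /* arbitrary test data */

    /* One reduction stage: 8x8 -> 2x2. Each output pixel covers a 4x4
       block of input texels via 4 interpolated fetches (4 x 4 = 16). */
    for (int oy = 0; oy < H / 4; oy++) {
        for (int ox = 0; ox < W / 4; ox++) {
            float sum = fetch2x2(input, ox * 4,     oy * 4)
                      + fetch2x2(input, ox * 4 + 2, oy * 4)
                      + fetch2x2(input, ox * 4,     oy * 4 + 2)
                      + fetch2x2(input, ox * 4 + 2, oy * 4 + 2);
            printf("out[%d][%d] = %.2f\n", oy, ox, sum / 4.0f);
        }
    }
    return 0;
}
```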

To deal with edge artifacts and non-power-of-four image dimensions, I read the pixels of the last stage image and do all of the remaining averaging on the CPU.
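The CPU side of that last step might look something like this sketch (the buffer stands in for an actual glReadPixels result, and the names are hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Stand-in for the RGBA bytes read back from a 2x1 terminal image. */
    uint8_t rawPixels[] = { 255, 0, 0, 255,   0, 0, 255, 255 };
    int totalNumberOfPixels = 2;

    unsigned sumR = 0, sumG = 0, sumB = 0, sumA = 0;
    for (int i = 0; i < totalNumberOfPixels; i++) {
        sumR += rawPixels[i * 4 + 0];
        sumG += rawPixels[i * 4 + 1];
        sumB += rawPixels[i * 4 + 2];
        sumA += rawPixels[i * 4 + 3];
    }
    printf("average RGBA: %u %u %u %u\n",
           sumR / totalNumberOfPixels, sumG / totalNumberOfPixels,
           sumB / totalNumberOfPixels, sumA / totalNumberOfPixels);
    return 0;
}
```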