Hi,
Up to now I have been playing with the example projects, and I am very impressed by GPUImage... amazing...
... and now I am thinking about the best way to get some numerical data (such as a contrast measure) from a small square ROI back into my iOS application. The filter chain would be
GPUImageVideoCamera -> Filter -> Contrast value
As far as I understand GPUImage, the fastest way to operate on the image data is to do such a contrast calculation inside a filter... or would it be better to read back the output buffer of the filter and do the contrast calculation in the main program...?
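Just to make it concrete, this is roughly how I imagine wiring it up (only a sketch of my idea, not a working solution: I am assuming GPUImageCropFilter is the right way to select the ROI, and I am using GPUImageAverageColor as a stand-in for the contrast filter I would eventually write):

    #import "GPUImage.h"

    // Camera is the source of the chain
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    // Crop to a small square ROI; the crop region is in normalized 0..1 coordinates
    GPUImageCropFilter *roiCrop =
        [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.4, 0.4, 0.2, 0.2)];

    // Stand-in for the "contrast" stage, just the built-in colour averaging for now
    GPUImageAverageColor *averageColor = [[GPUImageAverageColor alloc] init];

    [videoCamera addTarget:roiCrop];
    [roiCrop addTarget:averageColor];

    [videoCamera startCameraCapture];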
In the ColorObjectTracking example I saw some code like this:
[averageColor setColorAverageProcessingFinishedBlock:^(CGFloat redComponent, CGFloat greenComponent, CGFloat blueComponent, CGFloat alphaComponent, CMTime frameTime) ...
which seems to hand back numeric values that were calculated during the filtering step... but to be honest, I don't really understand the logic...
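If I am guessing right, the values computed on the GPU are handed back to the app through that block once per frame, so I would use it roughly like this (again just my guess at how it is meant to be used; averageColor is the GPUImageAverageColor from the sketch above, and handleROIValue:atTime: is a made-up method of my own class):

    __weak typeof(self) weakSelf = self;
    [averageColor setColorAverageProcessingFinishedBlock:^(CGFloat redComponent,
                                                           CGFloat greenComponent,
                                                           CGFloat blueComponent,
                                                           CGFloat alphaComponent,
                                                           CMTime frameTime) {
        // Derive some scalar from the averaged ROI; plain luminance here,
        // standing in for the contrast measure I actually want
        CGFloat luminance = 0.2126 * redComponent
                          + 0.7152 * greenComponent
                          + 0.0722 * blueComponent;

        // I assume the block is not called on the main thread,
        // so hop back to it before touching UI or app state
        dispatch_async(dispatch_get_main_queue(), ^{
            [weakSelf handleROIValue:luminance atTime:frameTime];
        });
    }];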
So my question is... what is the easiest way to get such image properties out of a filtered output...?
Thanks in advance,
Thomas