Closed: oliverwolfson closed this issue 11 years ago.
There are a few things I see here. First, you shouldn't recreate the filters or blended image for every new image you pass in. Create them and the picture1 instance once, and just swap out the input image by removing its targets and adding the appropriate filter as a target with the new input image. Then you just need to call -processImage to re-run the filter.
Second, don't use a UIImageView as your preview. Use a GPUImageView and target the last filter in the chain to that. UIImageView is expensive in two ways: first, the extraction of a UIImage from a filter requires a costly trip through Core Graphics, and second, the re-rendering of that in a UIImageView then causes the image to be extracted for display to the screen. With a GPUImageView, the content already on the GPU is kept there for display. Also, there's no need to create a new instance of that view every time the filter runs, so just create it once and keep it attached to the filter chain.
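The setup described above can be sketched roughly as follows. This is a minimal illustration, not code from the thread: the property names (`sepiaFilter`, `gpuImageView`, `sourcePicture`) and the choice of `GPUImageSepiaFilter` are assumptions; substitute whatever filter chain you actually use.

```objc
// One-time setup, e.g. in -viewDidLoad: create the filter and the
// GPUImageView once, and keep them attached for the life of the view.
self.sepiaFilter = [[GPUImageSepiaFilter alloc] init];
self.gpuImageView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:self.gpuImageView];
[self.sepiaFilter addTarget:self.gpuImageView];

// Each time a new input image arrives: detach the old source, swap in
// the new one, and re-run the filter. No UIImage round-trip needed.
[self.sourcePicture removeAllTargets];
self.sourcePicture = [[GPUImagePicture alloc] initWithImage:newImage];
[self.sourcePicture addTarget:self.sepiaFilter];
[self.sourcePicture processImage];
```

Because the filter and view persist, only the lightweight `GPUImagePicture` source is recreated per image, and the rendered result stays on the GPU all the way to the screen.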
Thanks. I can't figure out exactly how to chain the filters together without creating the UIImages between. Is there a resource for learning this? Feeling pretty dense.
It's pretty easy, you just have to use -addTarget: to connect one filter to the next. You'd add your GPUImageView as the last target in the chain. There are examples of this in the Readme, as well as the sample applications. In particular, the SimpleImageFilter example shows how to load an image, filter it, and present it onscreen.
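To make the chaining concrete, a two-filter chain might look like the sketch below. The filter classes chosen here (`GPUImageSepiaFilter`, `GPUImageGaussianBlurFilter`) are just examples, not the filters from the thread:

```objc
// Chain: source picture -> filter A -> filter B -> onscreen view.
GPUImagePicture *sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];

[sourcePicture addTarget:sepiaFilter];   // first filter takes the image
[sepiaFilter addTarget:blurFilter];      // second filter takes the first's output
[blurFilter addTarget:gpuImageView];     // GPUImageView is the last target

[sourcePicture processImage];            // run the whole chain
```

No intermediate UIImages are created; each `-addTarget:` call passes the GPU-resident texture from one stage to the next.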
I have the results I want, represented by the code below, but I assume I'm not running a very tight show in terms of optimization (or even correct setup). In the interest of helping me understand how these filters should be set up, would anyone be so kind as to point out the errors in my code below?