BradLarson / GPUImage

An open source iOS framework for GPU-based image and video processing
http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
BSD 3-Clause "New" or "Revised" License

Loading an image into GPUImagePicture from the Assets library triggers a memory warning #251

Open · brspurri opened 12 years ago

brspurri commented 12 years ago

When I load a full-resolution UIImage from the assets library into GPUImagePicture, I always get a memory warning, and sometimes two. I understand that applying filters to an image increases memory use, but I receive the warning for any full-resolution image, regardless of the filter stack I apply.

Has anyone else seen this?

When I resize the image to a smaller resolution (less than about 1200 × 1000), I can get rid of the warning.

What information can I post here to help diagnose this? Most of the problem is that I'm not sure what I need to be looking for.

Cheers, Brett

BradLarson commented 12 years ago

How big an image are we talking about here? How many filters in the chain are being used to process it? What device is this on?

Each intermediate frame of an image will take height x width x 4 bytes to hold it in memory. This is true for the initial input picture, as well as the output from each filter stage. More complex filters even have multiple stages within them.
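To put a number on that: an 8-megapixel photo from an iPhone 4S camera (3264 × 2448) works out to 3264 × 2448 × 4 bytes, or roughly 32 MB, for every framebuffer that holds a copy of it.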

brspurri commented 12 years ago

Thanks for the response, Brad. I'm ever impressed with your patience in answering all these questions :-)

I am talking about a full-size image taken with the iPhone 4S camera app (resolution 3264 × 2448). I get the warnings when I add a single filter (I am using the GPUImageTransformFilter as the first filter). When I move to applying a filter group, I get a memory crash after a second (or, if I'm lucky, a third) filter is applied. Obviously, as I add more filters to the chain, a memory crash is to be expected. At this point, I'd just like to have a single filter applied without the memory warning.
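In case it helps, here is a minimal sketch of what I'm doing (function and variable names are illustrative; the readback method name is the one from the current API as far as I know):

```objective-c
#import "GPUImage.h"

// Illustrative sketch: load a full-resolution UIImage into a GPUImagePicture
// and run it through a single GPUImageTransformFilter.
UIImage *applyTransformFilter(UIImage *inputImage)
{
    GPUImagePicture *stillImageSource =
        [[GPUImagePicture alloc] initWithImage:inputImage];

    GPUImageTransformFilter *transformFilter = [[GPUImageTransformFilter alloc] init];
    transformFilter.affineTransform = CGAffineTransformMakeScale(0.75, 0.75);

    [stillImageSource addTarget:transformFilter];
    [stillImageSource processImage];

    // Read the filtered result back out as a UIImage.
    return [transformFilter imageFromCurrentlyProcessedOutput];
}
```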

Is there anything I can look at? Or anything I can post to help diagnose this?

Cheers, Brett

peterhwong commented 12 years ago

Hey Brett, Did you ever get this resolved?

brspurri commented 12 years ago

Nope. My (hopefully temporary) solution is just to cap the resolution at about half that of a full-size image. That seems to do the trick. My app has a GPUImageVideoCamera instance (with a few filters applied) as well as a GPUImagePicture (into which I am loading the image from the assets library), also with a few filters. So I think the real issue is that I just have a lot of high-memory objects floating around in my app.

If you have any ideas to circumvent this, I'd love to hear your thoughts.

peterhwong commented 12 years ago

I'm running into this too, and it might be a deal breaker. Our PM wants full-resolution images with anywhere from 10 to 20 filters applied on top of a live video camera. At first I was getting memory warnings when I was up to about 5 filters. Now that I'm at 10, trying to take a picture on the 4S just locks up the device. I've tried using forceProcessingAtSize: and that does help, but then I don't get full-resolution output. Brad mentioned he was working on tiling, but I'm not sure how far along that is. Brad, is the above possible with tiling? If so, when do you think it will be ready? Thanks
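For reference, this is roughly how I'm calling it (firstFilter and stillImageSource are illustrative names, and the half-resolution size is just what I've been testing with):

```objective-c
// Cap processing at half of 3264x2448 on the first filter in the chain;
// filters downstream size their framebuffers from their input, so the
// reduction carries through the rest of the chain.
[firstFilter forceProcessingAtSize:CGSizeMake(1632.0, 1224.0)];
[stillImageSource addTarget:firstFilter];
```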

BradLarson commented 12 years ago

10-20 filters applied in a chain for a photo just isn't a realistic goal under the current architecture of the framework. Each one of those filters has its own backing framebuffer, which is width x height x 4 bytes in size. Some filters have more than one, because they contain several subfilters within them (edge detection, selective blur, etc.).
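At 3264 × 2448 that's roughly 32 MB per framebuffer, so a chain of 10-20 filters implies something on the order of 320-640 MB of framebuffers before any multi-stage filters are counted, and the 4S only has 512 MB of RAM in total.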

One way to work around this would be to combine filters. Several of the simpler color processing filters can easily be combined into a single fragment shader in a custom filter, which would reduce the number of passes required. Others could be optimized to reduce passes.
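As a sketch of what I mean, something like the following collapses a brightness and a contrast adjustment into a single pass (the constants are hardcoded for brevity; a real filter would expose them as uniforms):

```objective-c
// Combined brightness + contrast in one fragment shader, using the
// textureCoordinate / inputImageTexture conventions that GPUImage
// provides to every filter.
NSString *const kCombinedColorShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     color.rgb = color.rgb + 0.05;                          // brightness offset
     color.rgb = (color.rgb - vec3(0.5)) * 1.2 + vec3(0.5); // contrast around mid-gray
     gl_FragColor = color;
 }
);

GPUImageFilter *combinedFilter =
    [[GPUImageFilter alloc] initWithFragmentShaderFromString:kCombinedColorShaderString];
```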

Any tiling implementation is a long way off, because I simply can't keep up with the volume of issues in the project any longer, and my efforts are all focused on the machine vision side rather than standard image filtering.

brspurri commented 12 years ago

Update to my previous comment:

I was able to fake this by running an insanely time-expensive loop, which kept the memory down. I snap a full-resolution photo with only a pass-through filter (discussed elsewhere on these forums), and then create a GPUImagePicture using the unmodified image as the input. I then loop through the desired filters in the filter group and apply them one by one (one per loop iteration). At the end of each iteration, I grab the current UIImage (after the single filter is applied), then set the GPUImagePicture instance to nil to clear the memory. The next iteration then repeats this with the next filter. I've tested it with 20 filters or so, but since the memory never really grows, I would assume it would work for many more.
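In rough code, the loop looks something like this (names are illustrative, and this assumes ARC):

```objective-c
// Apply each filter in its own pass so only one full-size framebuffer
// chain is alive at any given time.
UIImage *currentImage = fullResolutionPhoto;     // starting image (illustrative)
for (GPUImageFilter *filter in desiredFilters) { // NSArray of filters (illustrative)
    GPUImagePicture *picture =
        [[GPUImagePicture alloc] initWithImage:currentImage];
    [picture addTarget:filter];
    [picture processImage];
    currentImage = [filter imageFromCurrentlyProcessedOutput];
    [picture removeAllTargets]; // break the chain so the picture can deallocate
    picture = nil;
}
// currentImage now holds the output of the whole filter stack.
```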

I doubt it would work for blends (I don't really use them), but who knows. This way, the memory stays at about the one-filter level the entire time. This clearly isn't how the framework is designed to be used, but it seems to do the trick for me, at least for applying a combination of simple filters.

I would assume the best approach is the one @BradLarson suggested: combine your desired filters as best you can into a custom shader.

peterhwong commented 12 years ago

brspurri, thanks for the example. Is the time-expensive loop more than a couple of seconds for 20 filters?

Brad, could you elaborate on the memory required for each filter? Could a filter group use only one buffer? The framework is so fast; when does it allocate this memory? And if forceProcessingAtSize: is used on the first filter in a group, does that lower the required memory for each filter?
