Since I don't know OpenGL, I decided to try a workaround that suits my needs, albeit not quite as elegantly as I had hoped. This bit of code uses Core Image to layer the watermark over the image without artifacts.
-(UIImage *)alphaBlendImage:(UIImage *)image {
    // Composite the watermark over the current GPUImage filter output using Core Image.
    CIImage *watermark = [[CIImage alloc] initWithImage:image];
    CIImage *baseLayer = [[CIImage alloc] initWithImage:[self.filter imageFromCurrentlyProcessedOutput]];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *comboFilter = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [comboFilter setDefaults];
    [comboFilter setValue:watermark forKey:@"inputImage"];
    [comboFilter setValue:baseLayer forKey:@"inputBackgroundImage"];

    // Render the composited result back out to a UIImage.
    CIImage *outputImage = comboFilter.outputImage;
    CGImageRef cgImg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgImg];
    CGImageRelease(cgImg);

    return newImg;
}
This will do, but I really would like to know if there's a way to achieve similar results with GPUImage.
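For reference, the GPUImage pipeline I've been trying (the one that produces the fringe) looks roughly like this. It's just a sketch, with my own variable names:

// Rough sketch of my GPUImage blending setup (variable names are mine).
GPUImagePicture *baseImage = [[GPUImagePicture alloc] initWithImage:photo];
GPUImagePicture *watermarkImage = [[GPUImagePicture alloc] initWithImage:watermark];

GPUImageSourceOverBlendFilter *blendFilter = [[GPUImageSourceOverBlendFilter alloc] init];
[baseImage addTarget:blendFilter];       // first input: the background photo
[watermarkImage addTarget:blendFilter];  // second input: the watermark overlay

[baseImage processImage];
[watermarkImage processImage];

UIImage *result = [blendFilter imageFromCurrentlyProcessedOutput];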
This might be related to the use of premultiplied alpha when pulling images through Core Graphics, specifically the kCGImageAlphaPremultipliedFirst option I use in GPUImagePicture and GPUImageUIElement. Look for this line:
CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
Another combination of bitmap settings might produce better results here. I haven't taken the time to experiment with this myself.
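If anyone wants to experiment, one combination to try might be something like the following (untested, and the texture upload would likely need GL_RGBA instead of GL_BGRA to match the new byte order):

// Untested variation: RGBA byte order with alpha last, still premultiplied.
// (Core Graphics doesn't offer a non-premultiplied RGBA bitmap context.)
CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                  (size_t)pixelSizeToUseForTexture.width,
                                                  (size_t)pixelSizeToUseForTexture.height,
                                                  8,
                                                  (size_t)pixelSizeToUseForTexture.width * 4,
                                                  genericRGBColorspace,
                                                  kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);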
I played around with the values and found nothing that worked, but on a whim I tried something else: I took a look at the Core Graphics drawing routine I use to convert the UIView into the UIImage that I feed into GPUImage for watermarking.
Setup: I use a full-size image during the GPUImage preview stage, but re-render the view offscreen when capturing and watermarking. So offScreenView is that high-resolution view in the code below.
The original version that produced the stroke effect on the text:
UIGraphicsBeginImageContext(CGSizeMake(picWidth, picWidth));
CGContextRef context = UIGraphicsGetCurrentContext();
[offScreenView.layer renderInContext:context];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
I think I came to an acceptable solution by changing the first line:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(picWidth, picWidth), NO, 0.0); // NO = not opaque, 0.0 = device screen scale
CGContextRef context = UIGraphicsGetCurrentContext();
[offScreenView.layer renderInContext:context];
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
I don't know why UIGraphicsBeginImageContext behaves differently from UIGraphicsBeginImageContextWithOptions, but I have evidence to believe that, by default, the former creates an alpha mask from the black parts of the view: when I used black text to form the view, the text would disappear, but changing it to any other color made it show up in my GPUImage output. Since the latter actually takes the transparency boolean, it probably premultiplies the alpha in the drawing context. See this interesting tidbit on Stack Exchange about how Xcode automatically premultiplies alpha when you import PNG files into the project.
I don't know how my solution helps GPUImageUIElement as it's currently written, since you're using different drawing methods. Perhaps all you need is to use UIKit to render the graphic as I demonstrated above. (I'm just not sure how GPUImageUIElement is actually getting the image data into a GL context.)
Thoughts?
Within GPUImageUIElement, all of the rendering to the texture takes place within the -update method, where I use the exact same CGBitmapContextCreate call as above. The content of the UI element is rendered into that bitmap context using -renderInContext:.
There is no premultiplication before this rendering step, which is what led me to believe there was a problem with the bitmap context creation here.
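Paraphrased, the flow looks something like this (a rough sketch of the idea, not the actual source; the variable names are placeholders):

// Sketch of the -update flow described above (not the actual GPUImageUIElement source).
GLubyte *imageData = (GLubyte *)calloc(1, (size_t)layerPixelSize.width * (size_t)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                  (size_t)layerPixelSize.width, (size_t)layerPixelSize.height,
                                                  8, (size_t)layerPixelSize.width * 4, genericRGBColorspace,
                                                  kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// The UIKit content draws itself into the bitmap; this is where any premultiplication happens.
[layer renderInContext:imageContext];

// The raw bytes are then uploaded as the texture used by the rest of the filter chain.
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height,
             0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);

CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
free(imageData);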
It appears that Core Graphics only accepts premultiplied alpha for its bitmap contexts, so we don't have a lot of options there.
I saw that ... perhaps another approach could involve creating a premultiplied image, similar to what my context does, for use as the input texture. It's an extra step, and I suppose there could be some performance penalty, but maybe it's not as significant as one might think. Since I'm only working with static images when watermarking, I can get away with slightly slower computation.
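Something along these lines is what I have in mind (just a sketch; the method name is my own):

// Rough sketch: redraw the watermark through a UIKit bitmap context, which stores
// premultiplied pixels, before handing it to GPUImage. (Method name is illustrative.)
-(UIImage *)premultipliedImageFromImage:(UIImage *)image {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];  // drawing into the context premultiplies the alpha
    UIImage *premultiplied = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return premultiplied;
}

The result could then be fed to GPUImagePicture in place of the original watermark image.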
I'm not going to pursue GPUImageUIElement since I don't foresee needing it. We see that it's possible, but maybe it's just not necessary for most use cases, so I'll let somebody else file a ticket if they need something better.
Is there a solution?
I've been putting this off for some time now, but it's really starting to bug me:
When I add a crisp watermark (line art or text) and blend it with the photo, the watermark seems to have a gray stroke around it. It's most noticeable with white text on a light background. I've tried using higher-resolution watermarks and scaling them down, as well as an alpha blend and GPUImageSourceOverBlendFilter, but it doesn't seem to make a difference. I also noticed that the UIElement filter has more or less the same effect.
Low resolution: http://screencast.com/t/DOCRkaZwWY
Increasing the resolution on the watermark helps a bit, but doesn't solve it (4x the above resolution): http://screencast.com/t/963agvVf
It seems the alpha is blending down to light gray, but not entirely transparent. Is there any way to improve this situation short of using CPU-based rendering to add the watermark and vector/line art?
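For what it's worth, here's the back-of-the-envelope arithmetic behind my suspicion, assuming premultiplied pixel data ends up in a blend that expects straight (non-premultiplied) alpha:

// Hypothetical anti-aliased edge pixel: white text over a white background, 50% coverage.
float baseColor    = 1.0f;   // white background
float overlayColor = 1.0f;   // white watermark text (straight, un-premultiplied)
float overlayAlpha = 0.5f;   // edge coverage from anti-aliasing

// Straight-alpha source-over: base*(1-a) + color*a
float correct = baseColor * (1.0f - overlayAlpha) + overlayColor * overlayAlpha;  // = 1.0, still white

// If the stored color is already premultiplied (color*a = 0.5) but the blend
// multiplies by alpha again, the edge darkens toward gray:
float premultipliedColor = overlayColor * overlayAlpha;                                            // = 0.5
float doubleMultiplied = baseColor * (1.0f - overlayAlpha) + premultipliedColor * overlayAlpha;    // = 0.75, light gray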