BradLarson / GPUImage

An open source iOS framework for GPU-based image and video processing
http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
BSD 3-Clause "New" or "Revised" License

Overall output video quality #118

Open fotoboer opened 12 years ago

fotoboer commented 12 years ago

Hello again Brad,

I'm really sorry to bother you again, but I have a small question.

I'm using your framework right now to select a prerecorded video from the photo roll and process it using your GPUImageMovieWriter, which in turn uses the AVAssetWriter class.

I record a piece of video in my own tools with the AVCaptureSessionPresetiFrame1280x720 preset as a starting point, which looks like pretty crisp video on the iPhone 4S. Then I apply your saturation filter and the vignetting filter on top of each other like this:

GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
[saturationFilter setSaturation:1.5];
GPUImageVignetteFilter *vignettingFilter = [[GPUImageVignetteFilter alloc] init];
[vignettingFilter setupFilterForSize:_size];
[movieFile addTarget:saturationFilter];  
[saturationFilter addTarget:vignettingFilter];

This all works fine, but the quality of the output looks seriously deteriorated at the 1280x720 output resolution: a loss of detail to a degree that, for me, is not acceptable. Any idea where this happens? I could post before and after images to show you what I mean, if that would help describe the problem further.

Anyway, I tried to tinker with the AVVideoCompressionPropertiesKey dictionary to increase the bitrate, the profile level key, and such (I don't really know what I'm doing here, so this might work out totally wrong):

NSMutableDictionary * compressionProperties = [[NSMutableDictionary alloc] init];
[compressionProperties setObject:videoCleanApertureSettings forKey:AVVideoCleanApertureKey];
[compressionProperties setObject:videoAspectRatioSettings forKey:AVVideoPixelAspectRatioKey];
[compressionProperties setObject:[NSNumber numberWithInt: 40000000] forKey:AVVideoAverageBitRateKey];
[compressionProperties setObject:[NSNumber numberWithInt: 3] forKey:AVVideoMaxKeyFrameIntervalKey];
[compressionProperties setObject:AVVideoProfileLevelH264Main41 forKey:AVVideoProfileLevelKey];

And I saw some increase in quality, but it's still not good. Could this have something to do with filtering in the frame buffer, for example in this snippet?

    glBindTexture(GL_TEXTURE_2D, outputTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

It looks more or less like a lot of detail gets smeared. I know you commented this compression stuff out, perhaps for a good reason?

Here is a view of what it looks like: http://www.flickr.com/photos/fotoboer/7172998250/

Regards, Jeroen v Goor Appjuice.com

BradLarson commented 12 years ago

The default encoding parameters should be sufficient to produce sharp H.264, which is why I don't change them from the defaults.

You're not forcing the filters to process at a different size at some point? The linear interpolation used for OpenGL ES only applies if there's a change in image size, which shouldn't be happening at any point here if you're pulling from a 720p source and writing to one of that same resolution.
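One quick thing to rule out (a sketch only, assuming GPUImageFilter's forceProcessingAtSize: method): if either filter had been given an explicit processing size at some point, that is exactly where linear resampling would kick in. Passing CGSizeZero clears any such override:

    // Hypothetical check: clear any forced processing size so the filters
    // run at the native 1280x720 of the incoming frames.
    [saturationFilter forceProcessingAtSize:CGSizeZero];
    [vignettingFilter forceProcessingAtSize:CGSizeZero];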

Within the -processMovieFrame: method of GPUImageMovie, if you log out bufferHeight and bufferWidth for your movie, are they 720 and 1280 like they should be? I want to make sure that the movie reader is sending in frames at the correct resolution.

fotoboer commented 12 years ago

Yep, the two ints in the method you described log exactly as expected. I removed the vignetting for testing purposes, but the resulting quality is the same. I must say that it looks like the quality gets lost to aggressive compression. I open footage from the roll this way:

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
[picker setDelegate:self];  // set the delegate before presenting
[picker setAllowsEditing:YES];
NSArray *types = [[NSArray alloc] initWithObjects:(NSString *)kUTTypeMovie, nil];
[picker setMediaTypes:types];
NSLog(@"mediatypes: %@", types);  // log before releasing
[types release];
picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
[self presentModalViewController:picker animated:YES];
[picker release];

In the picker's delegate method I pick up the URL for the processor like so:

-(void)imagePickerController:(UIImagePickerController*)picker didFinishPickingMediaWithInfo:(NSDictionary*)info{
    NSURL *videoURL = [info valueForKey:UIImagePickerControllerMediaURL];
    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:videoURL];

etc etc.

Regards Jeroen

BradLarson commented 12 years ago

It could very well be the compression quality, then. Perhaps tinkering with the various compression properties is in order to see if we can match the output of the high quality video type. Different settings might be needed for realtime capture vs. reencoding of a movie.
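Something like this might be a starting point (a sketch only, assuming the initWithMovieURL:size:fileType:outputSettings: initializer on GPUImageMovieWriter; the bitrate is an arbitrary value to experiment with, not a verified fix):

    // Hypothetical output settings for a 720p H.264 re-encode; every value
    // here is an assumption to tune, not a recommendation.
    NSDictionary *compressionProperties = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:10000000], AVVideoAverageBitRateKey, // ~10 Mbit/s
        AVVideoProfileLevelH264Main41, AVVideoProfileLevelKey,
        nil];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        AVVideoCodecH264, AVVideoCodecKey,
        [NSNumber numberWithInt:1280], AVVideoWidthKey,
        [NSNumber numberWithInt:720], AVVideoHeightKey,
        compressionProperties, AVVideoCompressionPropertiesKey,
        nil];
    GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc]
        initWithMovieURL:outputURL
                    size:CGSizeMake(1280.0, 720.0)
                fileType:AVFileTypeQuickTimeMovie
          outputSettings:outputSettings];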

I had assumed the defaults were safe, and the results looked decent to me, but I probably just didn't look close enough.