BradLarson / GPUImage

An open source iOS framework for GPU-based image and video processing
http://www.sunsetlakesoftware.com/2012/02/12/introducing-gpuimage-framework
BSD 3-Clause "New" or "Revised" License

GPUImageRawDataInput not padded correctly #1517

Open scisci opened 10 years ago

scisci commented 10 years ago

If image data with a width that is not a multiple of 8 is passed to GPUImageRawDataInput, the image appears skewed. I'd like to fix this, but I'm not sure how to specify a texture size different from uploadedImageSize.

topfortune commented 10 years ago

Yeah, I also hit this issue. I uploaded 568x320 image data to GPUImageRawDataInput, and after filtering I got a twisted image in the GPUImageView.

[attached screenshot: twisted output image]

Do you have any solution for this?!

scisci commented 10 years ago

I think you need to request a texture framebuffer whose width is a multiple of 8 or 16 pixels (not sure which is necessary), like the code below.

outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
[outputFramebuffer disableReferenceCounting];

But then notify targets of the actual input size.

[currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];

I'm not sure if this is the correct way, but I'll try something like this in the next few days.
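For the pixelSizeToUseForTexture part, something like this is what I have in mind (just a sketch, and I'm still not sure whether the alignment needs to be 8 or 16):

// Round the texture width up to the assumed alignment; the original size is
// still what gets passed to targets via setInputSize: afterwards.
CGFloat alignment = 16.0;   // or 8.0 -- whichever turns out to be required
CGSize pixelSizeOfImage = uploadedImageSize;
CGSize pixelSizeToUseForTexture = CGSizeMake(ceil(pixelSizeOfImage.width / alignment) * alignment,
                                             pixelSizeOfImage.height);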

topfortune commented 10 years ago

Thanks to @scisci

I don't know much about the GPUImage framework, so I'm not sure how to add your code to my program.

I am working on a live streaming app on iOS. After I receive an encoded video packet from the server, I decode it and get the BGRA data (as an unsigned char* array). Then I use the BGRA data to create a GPUImageRawDataInput object and add a filter to it. Finally, I add a GPUImageView as a target of the filter to display the result. Here is my code:

unsigned char* decodedDataBytes;

// after decoding the video packet into BGRA bytes

rawDataInput = [[GPUImageRawDataInput alloc] initWithBytes:decodedDataBytes size:CGSizeMake(videoWidth, videoHeight)];
filter = [[GPUImageSepiaFilter alloc] init];
[rawDataInput addTarget:filter];
[filter addTarget:(GPUImageView *)self.view];

[rawDataInput processData];

When the video size is 568x320, the resulting image is twisted, but when the size is 1280x720 or 320x180, the result is correct.

scisci commented 10 years ago

You might be able to use a GPUImageCropFilter to crop to a multiple of 8 pixels before going into your sepia filter.
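Something along these lines is what I mean (just a sketch using your variables from above; since 568 is already a multiple of 8, this rounds the width down to a multiple of 16, i.e. 560, in case that turns out to be the real requirement — GPUImageCropFilter takes a normalized crop region):

// Sketch: crop the 568-pixel-wide frame down to 560 (a multiple of 16) before the sepia filter.
CGFloat croppedWidth = floorf(videoWidth / 16.0f) * 16.0f;   // 560 for a 568-wide frame
GPUImageCropFilter *cropFilter =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, croppedWidth / videoWidth, 1.0)];

[rawDataInput addTarget:cropFilter];
[cropFilter addTarget:filter];
[filter addTarget:(GPUImageView *)self.view];
[rawDataInput processData];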

topfortune commented 10 years ago

Thank you @scisci

I fixed this. I found the padding for GPUImageRawDataInput is a multiple of 16 pixels!

scisci commented 10 years ago

I finally solved this issue for myself, in case anyone is interested.

You need to make sure the width of the data going into the filter is a multiple of 16 pixels. I then had to add/modify the GPUImageRawDataInput code as follows:

In GPUImageRawDataInput.h:

@interface GPUImageRawDataInput : GPUImageOutput
{
  CGSize uploadedImageSize;
  CGSize scaledOutputImageSize;

  dispatch_semaphore_t dataUpdateSemaphore;
}

- (void)setOutputImageSize:(CGSize)imageSize;

// ... rest of the existing interface ...

@end

And in GPUImageRawDataInput.m:

- (void)setOutputImageSize:(CGSize)imageSize
{
  scaledOutputImageSize = imageSize;
}

- (CGSize)outputImageSize
{
  return scaledOutputImageSize;
}

So you would scale your input to a multiple of 16 pixels, but then reset the outputImageSize of the filter back to the original size, so that any remaining filters get the original size rather than the scaled size. @BradLarson Let me know if you think this is the correct approach, and I could submit a pull request.
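For reference, per frame the usage ends up looking roughly like this (scaledDataBytes and scaledWidth are just placeholder names here, and the actual scaling of the BGRA bytes is left out):

// Per-frame sketch: upload at a width scaled to a multiple of 16, then report
// the original size downstream. scaledDataBytes is the BGRA frame after scaling
// it to scaledWidth x videoHeight (the scaling itself happens outside GPUImage).
size_t scaledWidth = ((videoWidth + 15) / 16) * 16;   // next multiple of 16, assuming videoWidth is an integer

rawDataInput = [[GPUImageRawDataInput alloc] initWithBytes:scaledDataBytes
                                                      size:CGSizeMake(scaledWidth, videoHeight)];

// With the modification above, later filters and the view see the original size.
[rawDataInput setOutputImageSize:CGSizeMake(videoWidth, videoHeight)];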