Open rebotnix opened 10 years ago
@yu-str / @Ivansss: could you please provide some hints on how to compile for iOS? Could you please share the configure and make commands you used for compiling libde265?
I am trying to figure out more info on this. I think we have to add the right flags to configure to compile it for ARM. I started with the GCC for ARM that I found in the iOS Developer folder. It seems that this compiler can't compile libde265, so I will try the other ones as well. I do not know if we also have to set the CFLAGS.
rebotnixMacBookAir:libde265 gary$ ./configure CC=/Developer/Platforms/iPhoneOS.platform/Developer/usr/llvm-gcc-4.2/bin/arm-apple-darwin10-llvm-gcc-4.2
checking build system type... i386-apple-darwin13.2.0
checking host system type... i386-apple-darwin13.2.0
checking target system type... i386-apple-darwin13.2.0
checking how to print strings... printf
checking for gcc... /Developer/Platforms/iPhoneOS.platform/Developer/usr/llvm-gcc-4.2/bin/arm-apple-darwin10-llvm-gcc-4.2
checking whether the C compiler works... no
configure: error: in `/Users/gary/Desktop/libde265/libde265':
configure: error: C compiler cannot create executables
See `config.log' for more details
Well, the app in the App Store uses an older version which was written in C. The newest version is C/C++, but I believe you don't need to do anything fancy. You can just add the libde265 source files to an Xcode project; since it is C/C++ it should just compile. In this case you might need to manually create de265-version.h; this file just contains the library version definitions.
The public API is C, so you can just import the de265.h header into your Obj-C code (if it exposes C++ you might need to convert your Obj-C classes to Obj-C++).
Regarding frames: probably the easiest way to use HEVC frames in UIKit is to convert them from YUV to RGB buffers, then make UIImages (with the help of CGImage and bitmaps) from the buffers, and then show the images in a UIImageView.
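A minimal sketch of that setup, assuming the libde265 sources have been added to the Xcode project (the HEVCDecoder class name is made up; only de265_new_decoder / de265_free_decoder are the real C API calls from de265.h):

// HEVCDecoder.m - minimal sketch, assuming the libde265 sources are in the project
#import <Foundation/Foundation.h>
#include "de265.h"

@interface HEVCDecoder : NSObject
@end

@implementation HEVCDecoder {
    de265_decoder_context* _ctx;    // opaque context from the C API
}

- (instancetype)init {
    if ((self = [super init])) {
        _ctx = de265_new_decoder();  // plain C call, works fine from Obj-C
    }
    return self;
}

- (void)dealloc {
    if (_ctx) de265_free_decoder(_ctx);
}

@end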
I see, I will try and fight through this :) I will try to create an empty Xcode project and add the required files.
If you want to cross-compile using configure, you will also have to pass --host=<target-host>.
Examples of cross-compiling libraries for iOS can be found here (just did a quick search on Google): http://tinsuke.wordpress.com/2011/02/17/how-to-cross-compiling-libraries-for-ios-armv6armv7i386/ http://stackoverflow.com/questions/11711100/cross-compiling-libogg-for-ios
The same should also apply to libde265.
Maybe this line is useful for someone. I compiled it for iOS from the command line with this line:
./configure CC=/Developer/Platforms/iPhoneOS.platform/Developer/usr/llvm-gcc-4.2/bin/llvm-c++-4.2 --host=arm-apple-darwin7
Just to confirm: I have just compiled libde265 by putting the source files into an Xcode project and adding the de265-version.h header manually, as I described earlier. You can grab de265-version.h.in, rename it, and change the versions to 0x00080000 and "0.8" there.
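For reference, the hand-made header then ends up looking roughly like this (macro names follow de265-version.h.in, so double-check them against your checkout):

/* de265-version.h - created manually from de265-version.h.in */
#ifndef DE265_VERSION_H
#define DE265_VERSION_H

#define LIBDE265_NUMERIC_VERSION 0x00080000
#define LIBDE265_VERSION "0.8"

#endif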
Thanks for the info. I added de265 to my project, set the header search path, and it compiled. I will now try to write a YUV to RGB buffer and some file operations to load my intra-only generated HEVC frames.
Is there a libde265 mailing list? I looked at the page and did not find any info on this.
You could use the YUV2RGB conversion from libde265.js as a first start: https://github.com/strukturag/libde265.js/blob/40432e31d42ad10bf23e8aa210648c893a522285/post.js#L468
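For reference, the same conversion in plain C looks roughly like this (a sketch using standard BT.601-style integer coefficients; this is not code taken from libde265 or libde265.js):

#include <stdint.h>

static inline uint8_t clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

// convert a YUV 4:2:0 image with per-plane strides into a tightly packed RGBA buffer
void yuv420_to_rgba(const uint8_t* y,  int y_stride,
                    const uint8_t* cb, int c_stride,
                    const uint8_t* cr,
                    uint8_t* rgba, int width, int height)
{
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int Y = y [row * y_stride + col];
            int U = cb[(row/2) * c_stride + col/2] - 128;
            int V = cr[(row/2) * c_stride + col/2] - 128;

            uint8_t* p = rgba + 4 * (row * width + col);
            p[0] = clamp255(Y + ((91881 * V) >> 16));               // R
            p[1] = clamp255(Y - ((22554 * U + 46802 * V) >> 16));   // G
            p[2] = clamp255(Y + ((116130 * U) >> 16));              // B
            p[3] = 0xff;                                            // A
        }
    }
}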
I have a project running, but I'm not able to display one intra HEVC frame with libde265. I tested the file with the two apps in the App Store and they also do not display an image. Do I maybe need at least more than one intra frame? I will try to encode a file with 24 FPS and try again.
The same image works when I use your JavaScript version. Are the Emscripten-generated JavaScript code and the App Store code maybe not the same code base version?
I uploaded the sample intra file here; when you have time you can take a look yourself in the app: http://slot3.com/temp/oneIntra.hevc
The visualize function seems very nice; I would like to visualize how our test encoder encodes intra frames only.
You can decode single intra frames, but you have to tell the decoder that you pushed in all the data. Otherwise it won't know that there's no data following. You can do this with de265_push_end_of_frame() or de265_flush_data().
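Roughly, feeding a raw single-frame .hevc file and flushing could look like this (a sketch based on the public de265.h API; error handling is omitted):

#include <stdio.h>
#include <stdint.h>
#include "de265.h"

static void decode_single_intra(const char* path)
{
    de265_decoder_context* ctx = de265_new_decoder();

    // push the whole raw NAL byte stream into the decoder
    FILE* f = fopen(path, "rb");
    uint8_t buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
        de265_push_data(ctx, buf, (int)n, 0, NULL);
    }
    fclose(f);

    // tell the decoder that no more data will follow
    de265_flush_data(ctx);

    // run the decoder until it outputs a picture or has nothing left to do
    const struct de265_image* img = NULL;
    int more = 1;
    while (more && !img) {
        de265_decode(ctx, &more);
        img = de265_get_next_picture(ctx);
    }

    if (img) {
        printf("decoded frame: %d x %d\n",
               de265_get_image_width(img, 0),
               de265_get_image_height(img, 0));
        // copy the Y/Cb/Cr planes out here; 'img' belongs to the decoder context
    }

    de265_free_decoder(ctx);
}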
I have converted all the stuff to Obj-C and I think I'm now at the point of displaying the decoded HEVC/intra frame. I do not use VLCKit, FFmpeg or other libs like SDL for displaying the intra frames yet.
When I query the de265 image height and width I receive the right frame size, same for the NAL units as well as the YCbCr planes when I call them like this:

int stride, chroma_stride;
const uint8_t* y  = de265_get_image_plane(img, 0, &stride);
const uint8_t* cb = de265_get_image_plane(img, 1, &chroma_stride);
const uint8_t* cr = de265_get_image_plane(img, 2, NULL);

As far as I have seen, it seems that I have to convert the YCbCr data, because there seems to be no matching CGColorSpace reference in the iOS SDK, or I have not found it yet.
I found CGColorSpaceCreateDeviceCMYK as well as CGColorSpaceCreateDeviceRGB.
If you can give me a helping hand to display the frame, that would be great. I have also seen that some libs use OpenGL for displaying decoded frames that FFmpeg provides, but most of them are in an RGB or YUV profile. Maybe it's faster to use OpenGL than the non-GPU pixel display classes.
I have written some conversion classes, but I do not get an image yet and do not see any debug info; it compiles and I have NO crashes :).. I tested the decoding with the x86 build in a terminal shell with my intra frame and it works there. Can someone take a look please?
A short code snippet is here; maybe this code helps other developers.
if (img) {
    NSLog(@"Image Found!");
    width  = de265_get_image_width(img, 0);
    height = de265_get_image_height(img, 0);

    int stride;
    uint8_t *rgbBuffer = (uint8_t *)malloc(width * height * 4);
    const uint8_t *yBuffer = de265_get_image_plane(img, 0, &stride);
    uint8_t val;
    int bytesPerPixel = 4;

    // YUV to RGB - because UIImage can't handle YUV directly
    for (int y = 0; y < height*width; y++)
    {
        val = yBuffer[y];
        // alpha channel
        rgbBuffer[(y*bytesPerPixel)] = 0xff;
        // next three bytes same as input
        rgbBuffer[(y*bytesPerPixel)+1] = rgbBuffer[(y*bytesPerPixel)+2] = rgbBuffer[(y*bytesPerPixel)+3] = val;
    }

    // RGB buffer to UIImage
    UIImage *imageCopy = [ImageHelper convertBitmapRGBA8ToUIImage:rgbBuffer withWidth:width withHeight:height];
    UIImageView *imageView = [[UIImageView alloc] initWithImage:imageCopy];
    [self.window addSubview:imageView];

    if (stop) more = 0;
    else      more = 1;

    // flush the libde265 decoder, because we only have single intra HEVC frames
    de265_flush_data(ctx);
....
My image helper for converting an RGBA8 buffer to a UIImage looks like this:
+ (UIImage *) convertBitmapRGBA8ToUIImage:(unsigned char *) buffer
                                withWidth:(int) width
                               withHeight:(int) height {
    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,   // data provider
                                    NULL,       // decode
                                    YES,        // should interpolate
                                    renderingIntent);

    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
        pixels = NULL;   // avoid freeing the buffer twice below
    }

    UIImage *image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);

        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }

        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);

    if (pixels) {
        free(pixels);
    }
    return image;
}
Your loop for(int y = 0; y < height*width; y++) will only work when you are lucky. The memory layout in general has stride bytes per row, i.e. there may be extra, unused bytes at the end of each line. Note also that the stride of your input and output images may be different.
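For comparison, a stride-aware version of that copy (keeping the grayscale-only output and the byte order of the snippet above) would walk the image row by row:

// walk the luma plane row by row so input stride and output row length can differ
int in_stride;
const uint8_t* yBuffer = de265_get_image_plane(img, 0, &in_stride);
int out_stride = width * 4;   // tightly packed RGBA output rows

for (int row = 0; row < height; row++) {
    const uint8_t* src = yBuffer  + row * in_stride;
    uint8_t*       dst = rgbBuffer + row * out_stride;
    for (int col = 0; col < width; col++) {
        uint8_t val = src[col];
        dst[col*4 + 0] = 0xff;   // alpha, same position as in the original snippet
        dst[col*4 + 1] = val;
        dst[col*4 + 2] = val;
        dst[col*4 + 3] = val;
    }
}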
If it's helpful at all, the vcpkg port works fine on iOS.
Sorry for posting an issue for my request. I am studying and learning libde265, especially the Emscripten-generated version that runs with the help of JavaScript. I would like to compare the speed of the libde265 JavaScript version vs. an ARM Cortex CPU on my iPad. I have seen that you have published an app in the App Store, but I can't find the source for this app or any simple example of how I can compile libde265 and display decoded HEVC frames in a UIView under iOS.
Can you please help me out with some instructions or an example?
Thanks a lot.
Gary