adamwulf / JotUI

OpenGL based drawing view built for and used by Loose Leaf for iPad
http://getlooseleaf.com/opensource/
MIT License
268 stars 28 forks

JotUI becomes blurred after zooming in #41

Open CoderMaurice opened 5 years ago

CoderMaurice commented 5 years ago

Hello adam: I'm working on a handwriting project now (using JotUI), but I don't know much about OpenGL. Here is the problem: the screenshot below is my app. I use two JotViews (I'll call the top one A and the bottom one B). They have the same size. B is in a scroll view and is responsible for writing and zooming, while A is responsible for displaying the strokes written on B. Because of the zoom, the strokes on B become blurred. Do you have a better implementation? Should I render the focus rect, or double the size of the strokes and canvas?

image

adamwulf commented 5 years ago

The blur doesn't seem to be because of the zoom; it looks like it's from the brush texture. You can see in A that the edges have that same lightness. If you change to a harder-edged brush texture it should sharpen up.

CoderMaurice commented 5 years ago

Yes, the blur is because of the texture. It isn't too bad when not zoomed in, but when zoomed in the blur becomes very serious.

CoderMaurice commented 5 years ago

image

adamwulf commented 5 years ago

Since JotView rasterizes what's written, some amount of blur will be expected. You can see some blurring in Loose Leaf as well if you pinch to zoom a page, it'll pixelate slightly.

The only option for getting around that is to increase the resolution of the B view, so that it's double resolution for instance, with a 50% scale transform. Then allow scaling up to 100% of B's true resolution, which would be twice the resolution/size of A. The big downside to this is the memory consumption of the larger resolution. Then it's just a matter of picking (a) how much you want to scale up the resolution of B compared to A and (b) what to set as the max zoom level for the user. It's a trade-off of quality vs memory.

CoderMaurice commented 5 years ago

Thank you for your reply; I will try other approaches. By the way, how do I export a transparent background image (strokes only)? I couldn't find anywhere to set the transparency of the context.

adamwulf commented 5 years ago

JotView will export a transparent image with only strokes by default. Are you loading an opaque image before drawing, by chance?

CoderMaurice commented 5 years ago

It seems that the image was given a white background when creating the bitmap context. Even when I just create the context and generate the image, it still has a white background:

CGContextRef bitmapContext = CGBitmapContextCreate(NULL, exportSize.width, exportSize.height, 8, exportSize.width * 4, colorspace, kCGImageAlphaPremultipliedLast);
cgImage = CGBitmapContextCreateImage(bitmapContext);

adamwulf commented 5 years ago

I just tested the jotuiexample and it is indeed saving a transparent image with only ink visible. How are you saving and loading the image to check whether it's transparent or not?

CoderMaurice commented 5 years ago

I made a stupid mistake, I saved the image to photo album :D

adamwulf commented 5 years ago

ah that would do it :) glad it's an easy fix

CoderMaurice commented 5 years ago

There are some handwriting note apps like Noteshelf that have very smooth lines; no matter how thin the strokes are or how far you magnify them, they don't blur. Any ideas?

adamwulf commented 5 years ago

Instead of a raster-based view like JotUI, you'd need a vector-based view. One strategy I've seen show smooth results at any zoom is to use a CAShapeLayer for each stroke, with a CGPath defining the path of the stroke. I don't know of any libraries that offer that, though I suspect they're out there somewhere.

CoderMaurice commented 5 years ago

You mean they're not using OpenGL? It must be very difficult to draw a line with velocity-based width using CAShapeLayer.

CoderMaurice commented 5 years ago

In JotUI, instead of using images as brush textures, what about drawing lots of dots (maybe GL_POINTS)? Would that reduce the blur?

adamwulf commented 5 years ago

The problem isn't OpenGL, it's that the content is rasterized at all. The zoom is being done by scaling the view itself, which will blur the rasterized content. That's why CALayers perform better at zoom: a transform scales them as vectors instead of rasters.

CoderMaurice commented 5 years ago

Hi adam, I worked on it for a few days and am now using another approach to make the strokes clear. When a stroke ends, I render a stroke image, doubling the stroke bounds and the stroke width. This gives higher definition.

But I have a strange problem and I can't find the reason: when I double the stroke bounds and the stroke width, sometimes the stroke is missing some parts.

This is a few strokes rendered: 57518667-21020500-734c-11e9-8d3e-acddb89058f7

Red lines are the missing part: IMG_1842

I just doubled the (x, y) and width of every element in the stroke and created a new stroke to render. The missing parts look like they've been cut off. Hope I can get your advice.

Best Regards, Maurice

adamwulf commented 5 years ago

I can't think of anything offhand that would cause that. I suggest stepping through renderAllStrokesToContext and making sure all strokes/elements are in fact getting drawn. Also check that it's not using the scissorRect, which might be clipping some elements.

The CurveToPathElement also has an internal cache in _dataVertexBuffer and _vbo, so make sure those are properly reset as well after resizing the elements. You might need to call generatedVertexArrayForScale: on each element after you create the new stroke.

CoderMaurice commented 5 years ago

I found that the problem is likely the size of the context. The cut-off part is always on the top or right side, which corresponds to the OpenGL coordinate system. Is the context's maximum size limited to SCREEN.Size * SCREEN.Scale? In order to render a higher-definition stroke, I need to enlarge the stroke, so the size of the exported image may be larger than the limit.

Here are two wrong results for the exported stroke: A1: IMG_1848 A2: copy of “IMG_1848” B1: IMG_1859 2 B2: IMG_1859

This is the code I modified from exportToImageOnComplete:. Its role is to render an image of a single stroke.

- (void)exportAStroke:(JotStroke *)stroke onComplete:(void (^)(UIImage*))exportFinishBlock
{
    CheckMainThread;

    if (!exportFinishBlock)
        return;

    if (![imageTextureLock tryLock]) {
        exportFinishBlock(nil);
        return;
    }

    dispatch_async([JotView importExportImageQueue], ^{
        @autoreleasepool {

            __block CGImageRef cgImage;
            __block UIImage* image;

            JotGLContext* secondSubContext = [[JotGLContext alloc] initWithName:@"JotViewExportToImageContext" andSharegroup:mainThreadContext.sharegroup andValidateThreadWith:^BOOL {
                return [JotView isImportExportImageQueue];
            }];

            CGRect strokeRect = stroke.bounds;
            CGSize strokeSize = strokeRect.size;
            CGFloat scale = [UIScreen mainScreen].scale;
            CGFloat strokeScale = 2;
            JotStroke *newStroke  = [stroke fillStrokeElementsWithScale:strokeScale offset:CGPointMake(-strokeRect.origin.x, -strokeRect.origin.y) inContext:secondSubContext];

            if (!newStroke) {
                // don't leak the texture lock on the early-return path
                [imageTextureLock unlock];
                exportFinishBlock(nil);
                return;
            }

            CGSize fullSize = CGSizeMake(ceilf(strokeSize.width) * scale * strokeScale, ceilf(strokeSize.height) * scale * strokeScale);
            CGSize exportSize = fullSize;

            [secondSubContext runBlock:^{
                @autoreleasepool {

                    [secondSubContext glDisableDither];
                    [secondSubContext glEnableBlend];
                    [secondSubContext glBlendFuncONE];
                    [secondSubContext glViewportWithX:0 y:0 width:(GLsizei)fullSize.width height:(GLsizei)fullSize.height];
                    CGSize targetTextureSize = exportSize;
                    JotGLTexture* canvasTexture = [[JotTextureCache sharedManager] generateTextureForContext:secondSubContext ofSize:targetTextureSize];
                    [canvasTexture bind];
                    GLuint exportFramebuffer = [secondSubContext generateFramebufferWithTextureBacking:canvasTexture];
                    [secondSubContext colorlessPointProgram].canvasSize = GLSizeFromCGSize(fullSize);
                    [secondSubContext coloredPointProgram].canvasSize = GLSizeFromCGSize(fullSize);
                    [secondSubContext bindFramebuffer:exportFramebuffer];
                    [secondSubContext assertCheckFramebuffer];
                    [secondSubContext glViewportWithX:0 y:0 width:fullSize.width height:fullSize.height];
                    [secondSubContext clear];

                    // ************** render start **************

                    [newStroke lock];
                    [newStroke.texture bind];
                    AbstractBezierPathElement* prevElement = nil;
                    for (NSInteger i = 0; i < newStroke.segments.count; i++) {
                        AbstractBezierPathElement *element = newStroke.segments[i];
                        [self renderElement:element fromPreviousElement:prevElement includeOpenGLPrepForFBO:nil toContext:secondSubContext scale:scale];
                        prevElement = element;
                    }
                    [newStroke.texture unbind];
                    [newStroke unlock];

                    // ************** render end **************

                    // read the image from OpenGL and push it into a data buffer
                    NSInteger dataLength = fullSize.width * fullSize.height * 4;
                    GLubyte* data = calloc(fullSize.height * fullSize.width, 4);
                    if (!data) {
                        @throw [NSException exceptionWithName:@"Memory Exception" reason:@"can't malloc" userInfo:nil];
                    }
                    // Read pixel data from the framebuffer
                    [secondSubContext readPixelsInto:data ofSize:GLSizeFromCGSize(fullSize)];
                    // now we're done, delete our buffers
                    [secondSubContext unbindFramebuffer];
                    [secondSubContext deleteFramebuffer:exportFramebuffer];
                    [canvasTexture unbind];
                    [[JotTextureCache sharedManager] returnTextureForReuse:canvasTexture];
                    // Create a CGImage with the pixel data from OpenGL
                    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
                    // otherwise, use kCGImageAlphaPremultipliedLast
                    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
                    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
                    CGImageRef iref = CGImageCreate(fullSize.width, fullSize.height, 8, 32, fullSize.width * 4, colorspace, kCGBitmapByteOrderDefault |
                                                    kCGImageAlphaPremultipliedLast,
                                                    ref, NULL, true, kCGRenderingIntentDefault);

                    // ok, now we have the pixel data from the OpenGL frame buffer.
                    // next we need to setup the image context to composite the
                    // background color, background image, and opengl image

                    // OpenGL ES measures data in PIXELS
                    // Create a graphics context with the target size measured in POINTS
                    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, exportSize.width, exportSize.height, 8, exportSize.width * 4, colorspace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast);
                    if (!bitmapContext) {
                        @throw [NSException exceptionWithName:@"CGContext Exception" reason:@"can't create new context" userInfo:nil];
                    }

                    // clear to transparent so strokes composite over nothing
                    CGContextClearRect(bitmapContext, CGRectMake(0, 0, exportSize.width, exportSize.height));

                    // flip vertical for our drawn content, since OpenGL is opposite core graphics
                    CGContextTranslateCTM(bitmapContext, 0, exportSize.height);
                    CGContextScaleCTM(bitmapContext, 1.0, -1.0);

                    //
                    // ok, now render our actual content
                    CGContextDrawImage(bitmapContext, CGRectMake(0.0, 0.0, exportSize.width, exportSize.height), iref);

                    // Retrieve the UIImage from the current context
                    cgImage = CGBitmapContextCreateImage(bitmapContext);
                    if (!cgImage) {
                        @throw [NSException exceptionWithName:@"CGContext Exception" reason:@"can't create new context" userInfo:nil];
                    }

                    image = [UIImage imageWithCGImage:cgImage scale:1 orientation:UIImageOrientationUp];

                    // Clean up
                    free(data);
                    CFRelease(ref);
                    CFRelease(colorspace);
                    CGImageRelease(iref);
                    CGContextRelease(bitmapContext);
                }
            }];

            [JotGLContext validateEmptyContextStack];
            [[MMMainOperationQueue sharedQueue] addOperationWithBlock:^{
                [imageTextureLock unlock];

                dispatch_async(dispatch_get_main_queue(), ^{
                    // ok, we're done exporting and cleaning up
                    // so pass the newly generated image to the completion block
                    @autoreleasepool {
                        //                        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
                        exportFinishBlock(image);
                        CGImageRelease(cgImage);
                    }
                });
            }];
        }
    });
}
adamwulf commented 5 years ago

The max is higher than SCREEN.Size * SCREEN.Scale, but I don't remember offhand what it is. You can increase it to whatever you want, and you'll find out real quick what the max is: it'll break immediately if the size is too large, so just increase it to whatever you need and see if it works. The max is determined by OpenGL and I believe is device dependent, though it may also be OS version dependent.