First, let me say that your code here has brilliantly solved the problem of displaying images with dimensions larger than the maximum device texture size. This is exactly what I needed.
I'd like to suggest a performance optimization.
From my experience, resizing the image as part of drawing is relatively expensive. I've implemented a more efficient alternative that has worked well for me.
Suppose we have a large image file named "bigimage.tiff", and suppose we've read, decoded, and created a CIImage instance for that image, called bigimage.
Now consider a scenario where our CIImage is built from bigimage stacked with many chained filters, say, CIColorMatrix, CIColorCube, custom kernels, etc. Suppose we want to zoom to fit, and the final image scale is some value we'll call scale. If we scale this image and render it, Core Image renders the filter stack over the full-sized image before applying the scale transform.
As an alternative, suppose we apply scale to bigimage, then apply our filter stack. The resulting image is still scaled as expected, but Core Image applies the filter stack to the scaled image before rendering. The result is a significant performance improvement.
(Hmmm... it's just occurred to me that we might want to apply scale after the other filters when scale > 1.)
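To make the idea concrete, here's a rough Swift sketch of both orderings, including the scale > 1 case. The filter chain is illustrative (just a CIColorMatrix stand-in for the full stack), and the function name is my own, not anything from your code:

```swift
import CoreImage

// Sketch: apply scale before the filter stack when downscaling,
// after it when upscaling, so the filters run over the fewest pixels.
func scaledFilteredImage(from bigimage: CIImage, scale: CGFloat) -> CIImage {
    // Stand-in for the full filter stack (CIColorMatrix, CIColorCube,
    // custom kernels, etc.). Here just an identity CIColorMatrix.
    func applyFilters(_ input: CIImage) -> CIImage {
        let matrix = CIFilter(name: "CIColorMatrix")!
        matrix.setValue(input, forKey: kCIInputImageKey)
        return matrix.outputImage!
    }

    let scaleTransform = CGAffineTransform(scaleX: scale, y: scale)

    if scale < 1 {
        // Downscaling: scale first, then filter the (smaller) image.
        return applyFilters(bigimage.transformed(by: scaleTransform))
    } else {
        // Upscaling: filter the original pixels first, then scale up.
        return applyFilters(bigimage).transformed(by: scaleTransform)
    }
}
```

Because CIImage is lazy, nothing is rendered until the final image is drawn; reordering the transform just changes the recipe Core Image compiles, which is where the savings come from.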
Strictly speaking, these are not the same images, but I've found them to be visually indistinguishable for the purposes of display.