adamatic-me opened 6 months ago
I just found another user's post with the white line on the image, and verified that it was also an image imported from an external camera to iOS, and posted on the iOS Bsky app.
https://bsky.app/profile/repepo.bsky.social/post/3knmf5rc5wb2f
I did some work here https://github.com/bluesky-social/social-app/pull/5464 that might fix this. Let's see. I could never repro the problem in the first place so not sure if this will work.
@haileyok I haven't tested because I don't want to go through building everything just yet. But I think the issue occurs because the `scaleImage` function in react-native-image-resizer, which uses UIKit on iOS, can produce fractional pixel values that cause sub-pixel rendering artifacts:
```objc
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```
The `size` parameter in `UIGraphicsBeginImageContextWithOptions` is in points, not pixels. The actual context size in pixels is calculated as `size.width * scale` by `size.height * scale`.
The image that @adamatic-me uploaded is 6000px wide by 4000px high: for every 1.5 units of width, there is 1 unit of height. When processed, the image is resized to 2000px on the long edge. Dividing 2000 by 1.5 gives 1333.333, a fraction.
Many common aspect ratios (16:9, 4:3, 3:2, etc.) can produce fractional dimensions when resizing.
When `size.width` or `size.height` is not a whole number, anti-aliasing can introduce sub-pixel rendering artifacts like a 1px white line.
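As a rough model of why this shows up as a line (my reading of the behavior, not something from Apple's documentation): the context must have a whole number of pixel rows, so a fractional height leaves the final row only partially covered by the image, and the blend with the backdrop renders as a thin light edge:

```swift
import Foundation

// Rough model: a context height of 1333.333 means 1333 fully covered
// pixel rows plus one row that the image only covers about a third of.
let requestedHeight = 1333.333
let fullyCoveredRows = requestedHeight.rounded(.down)      // 1333 rows
let lastRowCoverage = requestedHeight - fullyCoveredRows   // ~0.333
// The remaining ~0.667 of that row is filled from the (white) backdrop,
// which would be the 1px line seen in the uploaded images.
print(fullyCoveredRows, lastRowCoverage)
```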
A way this could be fixed in the current implementation is to round the values to integers:
```objc
newSize.width = roundf(newSize.width);
newSize.height = roundf(newSize.height);
```
This eliminates fractional pixels. Then use `CGRectIntegral` to snap the drawing rectangle's origin and size to integer values, which prevents drawing artifacts due to sub-pixel rendering.
A corrected `scaleImage` function would look like this:
```objc
UIImage* scaleImage(UIImage* image, CGSize toSize, NSString* mode, bool onlyScaleDown)
{
    // Calculate the image size in pixels
    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);

    CGSize newSize;
    if ([mode isEqualToString:@"stretch"]) {
        CGFloat width = toSize.width;
        CGFloat height = toSize.height;
        if (onlyScaleDown) {
            width = MIN(width, imageSize.width);
            height = MIN(height, imageSize.height);
        }
        newSize = CGSizeMake(width, height);
    } else {
        // Either "contain" (default) or "cover": preserve aspect ratio
        BOOL maximize = [mode isEqualToString:@"cover"];
        CGFloat scale = getScaleForProportionalResize(imageSize, toSize, onlyScaleDown, maximize);
        newSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    }

    // Round newSize to integer values to avoid fractional pixels
    newSize.width = roundf(newSize.width);
    newSize.height = roundf(newSize.height);

    // Create a context at scale 1.0 so points == pixels and the rounded
    // size is the exact pixel size
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);

    // Ensure the drawing rectangle uses integer values
    CGRect drawingRect = CGRectMake(0, 0, newSize.width, newSize.height);
    drawingRect = CGRectIntegral(drawingRect);
    [image drawInRect:drawingRect];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
```
Of course, you are ripping that library out shortly to move to Expo. I was looking into how Expo handles this on iOS when I first saw your PR.
Unfortunately, this code can run into the same bug and the same fix applies:
```swift
internal func manipulate(image: UIImage, resize: ResizeOptions) throws -> UIImage {
  let imageWidth = image.size.width * image.scale
  let imageHeight = image.size.height * image.scale
  let imageRatio = imageWidth / imageHeight

  var targetWidth = imageWidth
  var targetHeight = imageHeight
  if let width = resize.width {
    targetWidth = width
    targetHeight = width / imageRatio
  }
  if let height = resize.height {
    targetHeight = height
    targetWidth = targetWidth == imageWidth ? imageRatio * targetHeight : targetWidth
  }

  // Round dimensions to avoid fractional pixels
  targetWidth = round(targetWidth)
  targetHeight = round(targetHeight)
  let targetSize = CGSize(width: targetWidth, height: targetHeight)

  // Render at scale 1.0 so the context is sized in exact pixels
  let scaleFactor = 1.0
  UIGraphicsBeginImageContextWithOptions(targetSize, false, scaleFactor)

  // Use CGRectIntegral to ensure an integral drawing rectangle
  let drawingRect = CGRect(origin: .zero, size: targetSize)
  image.draw(in: CGRectIntegral(drawingRect))

  guard let newImage = UIGraphicsGetImageFromCurrentImageContext() else {
    UIGraphicsEndImageContext()
    throw NoImageInContextException()
  }
  UIGraphicsEndImageContext()
  return newImage
}
```
The information and code above are based on what is in Expo SDK 51 (the latest stable release). The upcoming version of Expo, Next (unversioned), will implement a new API for image manipulation; you can see the resizing function here. It moves away from `UIGraphicsBeginImageContextWithOptions`, which was deprecated in iOS 17, and replaces it with `UIGraphicsImageRenderer`, supported since iOS 10.
Unfortunately, the issue exists here too. Without rounding, `targetSize` may end up with values like 199.8 pixels, leading to rendering glitches. The fix is the same:
```swift
internal struct ImageResizeTransformer: ImageTransformer {
  let options: ResizeOptions

  func transform(image: UIImage) async -> UIImage {
    let imageWidth = image.size.width * image.scale
    let imageHeight = image.size.height * image.scale
    let imageRatio = imageWidth / imageHeight

    var targetWidth = imageWidth
    var targetHeight = imageHeight
    if let width = options.width {
      targetWidth = width
      targetHeight = width / imageRatio
    }
    if let height = options.height {
      targetHeight = height
      if targetWidth == imageWidth {
        // If width wasn't set, calculate it based on the new height
        targetWidth = targetHeight * imageRatio
      }
    }

    // Round dimensions to avoid fractional pixels
    targetWidth = round(targetWidth)
    targetHeight = round(targetHeight)
    let targetSize = CGSize(width: targetWidth, height: targetHeight)

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    format.opaque = false

    let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
    return renderer.image { _ in
      // Use an integral drawing rectangle
      let drawingRect = CGRect(origin: .zero, size: targetSize).integral
      image.draw(in: drawingRect)
    }
  }
}
```
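To see how a value like 199.8 arises, take a hypothetical 1000×999 source resized to a 200px width, mirroring the `targetHeight` computation in the transformer above:

```swift
import Foundation

// Hypothetical 1000x999 source resized to width 200.
let imageRatio = 1000.0 / 999.0
var targetHeight = 200.0 / imageRatio   // 199.8 -- fractional pixels
// With rounding, the renderer gets a whole pixel count instead.
targetHeight = targetHeight.rounded()   // 200.0
print(targetHeight)
```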
Full disclosure: I have never built code for iOS (just bought my first iPhone last week so I could learn), so I may be wrong about some of this. My recent deep dive into the docs and code was because I wanted to understand how the image downscaling was being implemented after I noticed inconsistencies across platforms, and then saw in your PR that the library was changing.
While the Expo Next iOS transform would be fine with the fix I included above, it would be nice if they implemented high-quality interpolation (`context.cgContext.interpolationQuality = .high`) instead of relying on the default, as the resulting images would look a lot better, especially when downscaling:
```swift
internal struct ImageResizeTransformer: ImageTransformer {
  let options: ResizeOptions

  func transform(image: UIImage) async -> UIImage {
    // Calculate the original image dimensions in pixels
    let imageWidth = image.size.width * image.scale
    let imageHeight = image.size.height * image.scale
    let imageRatio = imageWidth / imageHeight

    // Determine target dimensions while maintaining aspect ratio
    var targetWidth = imageWidth
    var targetHeight = imageHeight
    if let width = options.width {
      targetWidth = width
      targetHeight = width / imageRatio
    }
    if let height = options.height {
      targetHeight = height
      if targetWidth == imageWidth {
        // If width wasn't set, calculate it based on the new height
        targetWidth = targetHeight * imageRatio
      }
    }

    // Round dimensions to avoid fractional pixels
    targetWidth = round(targetWidth)
    targetHeight = round(targetHeight)
    let targetSize = CGSize(width: targetWidth, height: targetHeight)

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    // Set to true if the image is opaque (no transparency) to improve
    // rendering performance; perhaps conditional logic based on file type,
    // or another way to determine opacity, could go here.
    format.opaque = false

    let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
    return renderer.image { context in
      // Set interpolation quality to high for better downscaling
      context.cgContext.interpolationQuality = .high
      // Use an integral drawing rectangle
      let drawingRect = CGRect(origin: .zero, size: targetSize).integral
      image.draw(in: drawingRect)
    }
  }
}
```
If I have time to get this into an Expo PR I will, but I would want to get set up to build for iOS and test these fixes before doing that, and I'm in the middle of audit season at $dayjob. In lieu of that, I'm tagging @lukmccall so he can at least see this and consider it for a future improvement.
FWIW, the web function for image resizing used by Expo Next (canvas with Hermite filter) is great for quality output but could have similar issues under specific conditions.
The Android implementation could use the same fixes – I originally encountered the 1px white line bug last year on Android, as `react-native-image-resizer` uses the same method of calculating pixel dimensions – along with some other optimizations to improve downscaling quality.
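The core of the fix is the same on every platform. As an illustration (Swift here, but the arithmetic carries over to the Android resize path unchanged), a hypothetical helper – not taken from any of the libraries above – that computes an aspect-preserving, whole-pixel target size:

```swift
import Foundation

// Hypothetical helper: compute an aspect-preserving target size capped at
// maxLongEdge, rounded to whole pixels so no platform's rasterizer is
// handed fractional dimensions.
func integralTargetSize(sourceWidth: Double, sourceHeight: Double,
                        maxLongEdge: Double) -> (width: Int, height: Int) {
  let longEdge = max(sourceWidth, sourceHeight)
  // Only scale down; never upscale past the source resolution.
  let scale = min(1.0, maxLongEdge / longEdge)
  return (width: Int((sourceWidth * scale).rounded()),
          height: Int((sourceHeight * scale).rounded()))
}

print(integralTargetSize(sourceWidth: 6000, sourceHeight: 4000, maxLongEdge: 2000))
// (width: 2000, height: 1333)
print(integralTargetSize(sourceWidth: 4000, sourceHeight: 6000, maxLongEdge: 2000))
// (width: 1333, height: 2000)
```

Note that the portrait case lands the rounded 1333 on the width instead, matching the rotated example in the original report where the line followed the sensor's bottom edge.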
**Describe the bug**
A single white line is added to only some images that I upload on iOS.
**To Reproduce**
Steps to reproduce the behavior to show the white line on the bottom:
Please note that the same exact image file uploaded via desktop does not have the white line.
**Expected behavior**
The image should not have an additional white line added to the border when uploading from iOS.
**Additional context**
While this seems like a niche bug, a lot of photographers use the fancy camera -> iOS -> social post workflow. It is disheartening to put so much work into an image just to have Bsky alter it after posting.
The cause of the white line appears to be related to the source of the image. The only images I have seen this happen to so far, were imported to my iPhone from a different camera. However, this only happens when posted from iOS to Bsky. The line is not added when sharing from iOS to Gmail to desktop post, for example.
Example of post with the additional line from iOS : https://bsky.app/profile/adamatic.bsky.social/post/3knm5mmdksk2m
Example of post with no additional line, with same exact image file sent from iPhone via email to desktop, and then posted to Bsky webapp: https://bsky.app/profile/adamatic.bsky.social/post/3knmalz4nm22g
Additional example of a post with a white line, also imported from an external camera to iOS: https://bsky.app/profile/adamatic.bsky.social/post/3knbm3deaj22r Please note that here, the line is on the right edge of the posted image because the image was rotated to portrait. So the line seems to appear on the original (sensor) bottom edge of the image.