onevcat / Kingfisher

A lightweight, pure-Swift library for downloading and caching images from the web.
MIT License

ResizingImageProcessor returns a low-quality image #689

Closed carmelogallo closed 7 years ago

carmelogallo commented 7 years ago

Check List

Thanks for considering opening an issue. Before you submit, please confirm these boxes are checked.

Issue Description

ResizingImageProcessor returns a low-quality image.

What

Using ResizingImageProcessor, the resulting image is low quality.

Reproduce

Just use ResizingImageProcessor

Other Comment

Original Image: https://www.dropbox.com/s/l4rkyppkcjrd468/orginal.jpg?dl=0

This is the code I used:

let scale = UIScreen.main.scale
let resizingProcessor = ResizingImageProcessor(referenceSize: CGSize(width: 100.0 * scale, height: 100.0 * scale))
let url = URL(string: path)
imageView.kf.indicatorType = .activity
imageView.kf.setImage(with: url,
                      options: [.processor(resizingProcessor)],
                      completionHandler: { [weak self] image, error, cacheType, imageURL in
                          self?.imageView.layer.shadowOpacity = 0.5
                      }
)

Result: https://www.dropbox.com/s/8b2dxmswd6z2y72/low_quality.PNG?dl=0 It seems like a blur effect is applied.

For testing I had to use Toucan to resize the image, and the resulting image is as expected even without using a scale factor. It would be nice if you could fix this problem so I can avoid using another library.

let url = URL(string: path)
imageView.kf.indicatorType = .activity
imageView.kf.setImage(with: url,
                      options: nil,
                      completionHandler: { [weak self] image, error, cacheType, imageURL in
                          guard let image = image else { return }
                          let resizedImage = Toucan(image: image).resize(CGSize(width: 100.0, height: 100.0),
                                                                         fitMode: .crop).image
                          self?.imageView.image = resizedImage
                          self?.imageView.layer.shadowOpacity = 0.5
                      }
)

Result: https://www.dropbox.com/s/q286gjh3jzuheo9/high_quality.PNG?dl=0

onevcat commented 7 years ago

ResizingImageProcessor respects your image scale, so if your image is at 1x scale (which seems to be your case) while you are targeting a device with a 2x or 3x display, you need to set the size according to your screen:

let scale = UIScreen.main.scale
let resizingProcessor = ResizingImageProcessor(referenceSize: CGSize(width: size.width * scale, height: size.height * scale))
carmelogallo commented 7 years ago

@onevcat comment updated.

onevcat commented 7 years ago

@cikpis Can I get the link of your original image? Thanks.

carmelogallo commented 7 years ago

@onevcat here it is https://www.dropbox.com/s/l4rkyppkcjrd468/orginal.jpg?dl=0

onevcat commented 7 years ago

@cikpis Just checked the code in Kingfisher and also compared it with Toucan. I cannot see any real difference between the resizing methods of Toucan and Kingfisher, except the scale handling. In Toucan, images are drawn at the screen scale, which means 2 on most Retina devices and 3 on the Plus models.

By using a larger size in Kingfisher you should be able to get the same result.

Could you please make sure that you are setting the image size correctly? You can print the final image size and image scale. The image quality should be the same if the final image size * scale is equal for both.

For example, when you set size = (100, 100) for Kingfisher's resizing method and pass in an image with scale 1 (the regular case for a downloaded image), you get an image sized (100, 100) at scale 1, which means 10,000 pixels. However, when you use Toucan with the same size, Toucan converts it to scale 2 at the given size; the result is (100, 100) at scale 2, which means 40,000 pixels. That is why Toucan's result looks better. If you pass (200, 200) as the size input to Kingfisher, the result image will be (200, 200) at scale 1, which should be exactly the same quality as the Toucan result.
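The point-versus-pixel arithmetic above can be sketched in a few lines (a minimal illustration, not Kingfisher API; the helper name is made up):

```swift
import CoreGraphics

// Point size and scale together determine the pixel dimensions.
// Two results with equal pixel dimensions render at the same quality.
func pixelSize(pointSize: CGSize, scale: CGFloat) -> CGSize {
    CGSize(width: pointSize.width * scale, height: pointSize.height * scale)
}

let kingfisherResult = pixelSize(pointSize: CGSize(width: 200, height: 200), scale: 1)
let toucanResult     = pixelSize(pointSize: CGSize(width: 100, height: 100), scale: 2)
// Both come out to 200x200 pixels, so the two images should look identical.
```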

I tried your image with (500, 500)@1x from Kingfisher and (250, 250)@2x from Toucan, and then set the images on a 250x250 image view. You can see that there is no difference between them:

(Simulator screenshot, 2017-05-28: side-by-side comparison of the two results)

carmelogallo commented 7 years ago

Using size(width = 325, height = 325)

ResizingImageProcessor

let scale = UIScreen.main.scale
let resizingProcessor = ResizingImageProcessor(referenceSize: CGSize(width: size.width * scale, height: size.height * scale))

image scale = 1.0 image size = 650.0, 650.0

Toucan

let resizedImage = Toucan(image: image).resize(CGSize(width: size.width, height: size.height),
                                               fitMode: .crop).image

image scale = 2.0 image size = 325.0, 325.0

The result is identical to the images I posted in my previous comment. With ResizingImageProcessor the image looks as if a blur effect were applied.

onevcat commented 7 years ago

Oops, I have no idea what happened. Are you trying to set them on a 325x325 image view?

carmelogallo commented 7 years ago

Yes

onevcat commented 7 years ago

Tried again to set images from both results. I also saved them to disk and compared them side by side:

(Screenshot: side-by-side comparison of the two saved images)

Left is from Kingfisher (650x650@1x) and right is from Toucan (325x325@2x). I resized the preview window to make them the same size and cannot find any difference in detail.

I think I cannot reproduce this issue now, at least in the sample project. I created a branch (https://github.com/onevcat/Kingfisher/tree/issue-689) to include the Toucan result in the demo project. Could you check it out and build the Kingfisher-Demo target to see whether the images are the same for you? Then we would know whether it is an environment-related issue.

Thanks!

carmelogallo commented 7 years ago

Ok man, I might have found where the problem is with your library. I changed the code in the branch you gave me to make it the same as mine.

override func collectionView(_ collectionView: UICollectionView, willDisplay cell: UICollectionViewCell, forItemAt indexPath: IndexPath) {

    let url = URL(string: "https://photos-2.dropbox.com/t/2/AAABiXAtTmZiiqeuCwUSVaD0fps9iCb_-6MIDnfP_BzJEw/12/1913294/jpeg/32x32/3/1495998000/0/2/orginal.jpg/EKmXxAEY85LLRiACKAI/lt9cWxuehMA5zQ4CQDMm1X3YkmVIeVkj8IpskhMr8fs?dl=0&size=2048x1536&size_mode=3")!
    let imageView = (cell as! CollectionViewCell).cellImageView!
    imageView.layer.masksToBounds = false
    imageView.layer.shadowColor = UIColor.black.cgColor
    imageView.layer.shadowOpacity = 0.0
    imageView.layer.shadowOffset = CGSize(width: 0, height: 10)
    imageView.layer.shadowRadius = 10
    imageView.layer.shadowPath = UIBezierPath(rect: imageView.bounds).cgPath
    imageView.layer.shouldRasterize = true
    if (indexPath.row == 0) {
        let scale = UIScreen.main.scale
        let p = ResizingImageProcessor(referenceSize: CGSize(width: 325 * scale, height: 325 * scale))
        imageView.kf.setImage(with: url,
                              placeholder: nil,
                              options: [.transition(ImageTransition.fade(1)), .processor(p)],
                              progressBlock: { receivedSize, totalSize in
                                print("\(indexPath.row + 1): \(receivedSize)/\(totalSize)")
        },
                              completionHandler: { image, error, cacheType, imageURL in
                                imageView.layer.shadowOpacity = 0.5
                                print("\(indexPath.row + 1): Finished")
        })
    } else {
        imageView.kf.setImage(with: url,
                              placeholder: nil,
                              options: [.transition(ImageTransition.fade(1))],
                              progressBlock: { receivedSize, totalSize in
                                print("\(indexPath.row + 1): \(receivedSize)/\(totalSize)")
        },
                              completionHandler: { image, error, cacheType, imageURL in
                                (cell as! CollectionViewCell).cellImageView.image = Toucan(image: image!).resize(CGSize(width: 325, height: 325),
                                                                                                                 fitMode: .crop).image
                                (cell as! CollectionViewCell).cellImageView.layer.shadowOpacity = 0.5

                                print("\(indexPath.row + 1): Finished")
        })
    }

}

So, adding the following code breaks something in your processor:

imageView.layer.masksToBounds = false
imageView.layer.shadowColor = UIColor.black.cgColor
imageView.layer.shadowOpacity = 0.0
imageView.layer.shadowOffset = CGSize(width: 0, height: 10)
imageView.layer.shadowRadius = 10
imageView.layer.shadowPath = UIBezierPath(rect: imageView.bounds).cgPath
imageView.layer.shouldRasterize = true
onevcat commented 7 years ago

@cikpis

Oh, I see. You are using shouldRasterize, it is also related to the image scale. I think you could add

imageView.layer.rasterizationScale = UIScreen.main.scale

to avoid this.
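Applied to the layer setup from the snippet above, the relevant lines would look like this (a sketch based on the code already in this thread):

```swift
imageView.layer.shouldRasterize = true
// shouldRasterize caches the layer as a bitmap; without an explicit
// rasterizationScale the bitmap is drawn at 1x, which blurs the image
// on Retina screens.
imageView.layer.rasterizationScale = UIScreen.main.scale
```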

carmelogallo commented 7 years ago

Now it works! Finally :) Well, thanks for your time man! It was nice chatting with you, cheers!

carmelogallo commented 7 years ago

Hey @onevcat, I just discovered that the fade animation is not working with the previous UIImageView layer settings. Any clue?

onevcat commented 7 years ago

I think you can never get this to work if you use shouldRasterize. The transitions are built on UIView animation (UIView.transition), but shouldRasterize requires the layer to be rendered as a composed bitmap, so it is not possible (or not a good idea) to perform animations like fading, since that requires redrawing the image multiple times.

A possible workaround might be setting shouldRasterize in the completion handler, where the animation has just finished (and you may also need to remember to reset it before loading a new image into the same view). However, I am not sure whether this is what you want and/or whether it risks introducing performance issues in your UI.
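A sketch of that workaround, using the same Kingfisher 3.x completion handler signature as the other snippets in this thread (untested; assumes the fade transition and layer setup from earlier comments):

```swift
// Disable rasterization while the fade transition runs,
// then re-enable it once the image is in place.
imageView.layer.shouldRasterize = false
imageView.kf.setImage(with: url,
                      options: [.transition(ImageTransition.fade(1))],
                      completionHandler: { [weak self] image, error, cacheType, imageURL in
                          self?.imageView.layer.rasterizationScale = UIScreen.main.scale
                          self?.imageView.layer.shouldRasterize = true
                      })
```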

carmelogallo commented 7 years ago

Yeah. In fact I'm doing something tricky for that. Anyway, thanks again 👍

lastcc commented 7 years ago

@onevcat Hi, onevcat!

I also got a low-quality image, and I already know about the scale problem. But I don't think it should be necessary for developers to add an extra line just to compute a pixel size like:

let adjustedPixelSize = CGSize(width: adjustedSize.width * UIScreen.main.scale, height: adjustedSize.height * UIScreen.main.scale)

Because almost always, we just want to put the image into a UIImageView, and the size of that view is measured in points.

In my case, the cache code gets a little complex because I don't always download images from the network. Over 50% of the images are already in the database asset folder and many are PNGs. I need to cache a resized version of the large image to accelerate display in a table view (because decoding PNGs is time-consuming). I also need to cache an in-memory version of the original images as needed on the fly.

The tactic I am now using is an alternative key when caching the small preview images:

let imageResource = ImageResource(downloadURL: asset.fileURL, cacheKey: asset.fileURL.absoluteString + "|\(UIScreen.main.bounds.width)")

Note that I am using the absoluteString + "|\(UIScreen.main.bounds.width)" because the Plus devices or future devices may have different display mode.

So, as you can see, I have to write all these lines to wire things up, because the UIImageView extension method does not provide a cache key parameter.

And I suggest there should be a version key, which does not replace the cache key but composes with it:

let cacheKey = URL.absoluteString + versionKey
lastcc commented 7 years ago

Ohh, I guess...

There is an identifier key in the Processors... T T

lastcc commented 7 years ago

ResizingImageProcessor takes a CGSize that is a pixel size. However, CroppingImageProcessor takes a size that is a point size.

The point is: there is no explanation in the docs.

The BlurImageProcessor takes a blurRadius argument; is this in pixels or points? Kinda confusing. T T

private lazy var chainedProcessors: ImageProcessor = {
    let scale = UIScreen.main.scale
    let blurRadius = 2 * scale
    let size = CGSize(width: 27 * scale, height: 27 * scale)
    let anchor = CGPoint(x: 6.5 * scale, y: 6.5 * scale)
    let cornerRadius = 14 * scale

    return (BlurImageProcessor(blurRadius: blurRadius) >> CroppingImageProcessor(size: size, anchor: anchor)) >> RoundCornerImageProcessor(cornerRadius: cornerRadius)
}()

@onevcat sorry man, at you again...

... and isn't the anchor quite counterintuitive?

///   For example, when you have an image size of `CGSize(width: 100, height: 100)`,
///   and a target size of `CGSize(width: 20, height: 20)`: 
///   - with a (0.0, 0.0) anchor (top-left), the crop rect will be `{0, 0, 20, 20}`; 
///   - with a (0.5, 0.5) anchor (center), it will be `{40, 40, 20, 20}`
///   - while with a (1.0, 1.0) anchor (bottom-right), it will be `{80, 80, 20, 20}`

At first I thought the anchor point was the origin in the point-based coordinate system, or the origin in the unit coordinate system. But according to the explanation in the source code, it is the point around which the crop rect is positioned. Unfortunately in (my) practice, the output image's size is tied to screen coordinates, and the fractional value is hard to reason about.

onevcat commented 7 years ago

@lastcc Hi, all the processors (ResizingImageProcessor, CroppingImageProcessor) produce an output image in points instead of pixels.

The problem might be that you didn't specify a correct scale when loading it. Since in Kingfisher most images are downloaded from the network, there is no "@2x" or "@3x" convention to decide the scale of an image, so Kingfisher leaves it to the framework users. If your images should be 2x images, you need to also pass .scaleFactor(2.0) as an option to Kingfisher's methods (the default is 1.0). The processors you mentioned will respect the image scale. (Please submit an issue if it does not work like that.)

The blur processor applies a pixel blur radius to the pixel-based image (its raw data) and then creates a new image with the original image's scale. In other words, you should be able to treat the blur radius as a point-based value. My previous comment on the original post might be a bit confusing. Instead of handling scale in the processor, it may be better to first create an image at the scale you need, so you start with a correct point-based image.

The anchor is a value relative to the image size, and it works fine for my own needs to crop at the center or a corner. But yes, it would be useful if we could also crop at an arbitrary origin. Any PR on this would be welcome. If you'd like to discuss it more, please open another issue.

lastcc commented 7 years ago

@onevcat Let me explain:

No. Even if I provided a correct scale.

I am using the CroppingImageProcessor:

The result image should be point size {27, 27}, scale 3.0 == pixel {81, 81}.

If I po resultImage!.size, it should be {27, 27}, and when I po resultImage!.scale it should be 3.0. The actual pixel dimensions should be size * scale == pixel {81, 81}.

In other words, the size property (of a UIImage) returns a logical size, because Apple defines it so.

But the output image has a logical (point) size of {82, 82} with a scale factor of 3.0, which means a pixel size of {246, 246}.

(lldb) po resultImage!.size
▿ (82.0, 82.0)
  - width : 82.0
  - height : 82.0

(lldb) po resultImage!.scale
  3.0

The code I am using:

let url = URL(string: "https://...")
imageView.kf.setImage(with: url, placeholder: nil, options: [.backgroundDecode, .processor(CroppingImageProcessor(size: CGSize(width: 27, height: 27))), .scaleFactor(UIScreen.main.scale)])

And BlurImageProcessor's identifier is not associated with the scale factor; will this be an issue? self.identifier = "com.onevcat.Kingfisher.BlurImageProcessor(\(blurRadius))"

onevcat commented 7 years ago

@lastcc Ah, yes, there is an issue in the background decoding. I will fix it soon.

Not only the blur processor; none of the processors include the scale in their identifier. It should not be a problem, since generally there is no case where different scales are used on a single device.

wjling commented 6 years ago

Still having the problem @lastcc mentioned. I have an image of size {100, 40}. I use ResizingImageProcessor with referenceSize {100, 40} and scale 2.0, but the result image has size {100, 40}, scale 2.0. It is supposed to be size {50, 20}, scale 2.0. Using Kingfisher 4.8.1.

onevcat commented 6 years ago

@wjling The referenceSize is the target (final) size of your image, not the pixel count. For your case, instead of using a ResizingImageProcessor, I guess it should be enough to just pass .scaleFactor(2.0) as the option when you set an image.
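For example (a sketch; the URL is a placeholder), loading a server asset that was exported at 2x:

```swift
// The downloaded bitmap is 200x80 px; with .scaleFactor(2.0) Kingfisher
// treats it as a 100x40 pt image, matching the image view's point size.
let url = URL(string: "https://example.com/banner@2x.png")
imageView.kf.setImage(with: url, options: [.scaleFactor(2.0)])
```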

wjling commented 6 years ago

@onevcat Thanks for the help. Just passing .scaleFactor(2.0) works well. But I still need ResizingImageProcessor to fit my UIImageView's size. I pass referenceSize in points instead of pixels and it works as expected. Thanks again!

isadon commented 5 years ago

@onevcat It seems the issue mentioned by @lastcc still exists. I don't think we should always have to pass in a .scaleFactor that represents the device's screen scale. Kingfisher by default should scale the downloaded image to the device screen scale, as most other libraries do. Right now, if I simply download x.png from a URL, Kingfisher brings it in at 1x scale. Other libraries bring it in at 2x or 3x depending on the device screen.

onevcat commented 5 years ago

It is not a wise decision to set the scale for framework users. On iOS, @2x or @3x is used to indicate the preferred image scale for a local image. However, that is not the case for web images. Retina images were never standardized for the web, and most servers have their own solution.

Loading and converting a pixel-based image to points automatically, with the screen scale considered, worked for a while. But it caused more trouble: users were confused about why the same original image could end up with different dimensions on different devices.

An easy way is to just leave the image scale at 1 and shrink it into a point-based image view. Most of the time this gives you the same effect as loading an image with a higher scale. Or, make different versions of the images for different screens, and load the right one depending on the user's device with the correct .scaleFactor option.
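The second suggestion could be sketched like this (the URL pattern is hypothetical; it assumes the server hosts one asset per scale):

```swift
// Pick the asset matching the device scale and tell Kingfisher its scale,
// so the decoded image comes back point-based.
let scale = UIScreen.main.scale                      // 1, 2, or 3
let url = URL(string: "https://example.com/photo@\(Int(scale))x.jpg")
imageView.kf.setImage(with: url, options: [.scaleFactor(scale)])
```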

boldijar commented 2 years ago

I have the same issue, but I am not loading into an image view; I am loading into a tabBarItem. @onevcat

let resize = ResizingImageProcessor(referenceSize: CGSize(width: 24.0 * UIScreen.main.scale, height: 24.0 * UIScreen.main.scale), mode: .aspectFill)

With or without the scaling, the image is always pixelated.

let resize = ResizingImageProcessor(referenceSize: CGSize(width: 24.0, height: 24.0), mode: .aspectFill)
let processor = resize
KingfisherManager.shared.retrieveImage(with: url,
                                       options: [.processor(processor)],
                                       completionHandler: { result in
    switch result {
    case .success(let value):
        let img = value.image
        self.image = img.withRenderingMode(.alwaysOriginal) // (extension of UITabBarItem)
        print("ABCD")
    case .failure(let error):
        print("ABCD Error: \(error)")
    }
})

And it also looks like it is not a 24x24 image:

(Screenshot: pixelated tab bar icon)
onevcat commented 2 years ago

In your code I don't see a target scale factor, so that would explain it.

Maybe try this; it worked well for me:

let resize = ResizingImageProcessor(referenceSize: CGSize(width: 24.0, height: 24.0), mode: .aspectFill)
KingfisherManager.shared.retrieveImage(
    with: url,
-   options: [.processor(processor)],
+   options: [.processor(processor), .scaleFactor(UIScreen.main.scale)],
//...

Also, I would suggest printing out image.size and image.scale in the completion handler, to make sure the image size and scale are processed as expected.