photopea / UPNG.js

Fast and advanced PNG (APNG) decoder and encoder (lossy / lossless)

small target palettes miss key colors #43

Closed · leeoniya closed this issue 4 years ago

leeoniya commented 4 years ago

hey @photopea

it looks like UPNG suffers from the same weakness as other octree-type partitioning quantizers: small target palettes miss key colors. for example, a 32-color target palette produces this:

original: [image: lily-pond]

UPNG @ 32 colors:

[image: lily-pond]

RgbQuant produces this:

[image: rgbquant-32-lily]

this makes it impractical to implement a high-quality dither using the resulting palette.
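for reference, here's roughly how the two 32-color outputs above can be produced (a minimal sketch assuming the browser Canvas API; UPNG.encode's cnum argument and RgbQuant's colors option are the palette-size knobs):

```js
// a sketch: quantize the same image to 32 colors with both libraries.
// "img" is an already-loaded HTMLImageElement.
function quantizeBoth(img) {
  const cnv = document.createElement("canvas");
  cnv.width = img.width;
  cnv.height = img.height;
  const ctx = cnv.getContext("2d");
  ctx.drawImage(img, 0, 0);
  const rgba = ctx.getImageData(0, 0, cnv.width, cnv.height).data;

  // UPNG: cnum = 32 makes the encoder quantize to a 32-color palette
  const upngPng = UPNG.encode([rgba.buffer], cnv.width, cnv.height, 32);

  // RgbQuant: build a 32-color palette from the same pixels
  const q = new RgbQuant({ colors: 32 });
  q.sample(cnv);
  const palette = q.palette(true); // array of [r, g, b] tuples
  const reduced = q.reduce(cnv);   // Uint8Array of remapped RGBA pixels

  return { upngPng, palette, reduced };
}
```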

have you considered implementing an improvement like https://www.hindawi.com/journals/cin/2016/5302957/ ?

photopea commented 4 years ago

So you are saying RgbQuant gives a better result than UPNG? But why? Have you tried to calculate the difference with the original? Which one is "closer"?

leeoniya commented 4 years ago

> So you are saying RgbQuant gives a better result than UPNG?

subjectively, it does (though the current RgbQuant code is much slower... i'm going to work on this and try some new ideas). the original image very obviously has prominent yellow and orange colors, which are completely absent from the UPNG palette, and no dither can fix this. RgbQuant's palette captures the fundamental colors much better; i don't think we need anything except our eyeballs to see that.

> Have you tried to calculate the difference with the original?

i did not, but if i calculated it and it turned out that the cumulative error of UPNG was smaller than that of RgbQuant, then i would seriously question the validity of the automated metric (in this instance).
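for the record, the sort of number in question would be something like plain per-channel mean squared error, as in this sketch; it's exactly this kind of aggregate that can disagree with what the eye flags first:

```js
// a sketch of a "cumulative error" metric: per-channel mean squared
// error between two RGBA buffers of identical dimensions (alpha ignored).
function mse(orig, quant) {
  let sum = 0;
  for (let i = 0; i < orig.length; i += 4)
    for (let c = 0; c < 3; c++) {
      const d = orig[i + c] - quant[i + c];
      sum += d * d;
    }
  return sum / ((orig.length / 4) * 3);
}
```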

photopea commented 4 years ago

If we were to talk about which image is more pleasant to our eyes, we would argue here for years.

You have to specify an exact metric, a method, which will tell what image is better. Without such a method, there is no point in changing anything, since we don't know what we want to achieve.

photopea commented 4 years ago

Also, the grass in the UPNG version is much nicer! :D

leeoniya commented 4 years ago

> Also, the grass in the UPNG version is much nicer! :D

it is! but the premature loss of essential colors from the palette is, in my opinion, a fundamental failure of a color reduction algorithm. to me, the extra detail in the grass is far less noticeable, and far less important, than losing a significant and obvious part of the color wheel. i expect to lose detail and have colors bleed together, but i don't expect drastic color elimination when it can clearly be avoided. plus, a lot of the missing spatial detail can be recovered by dithering the RgbQuant version (shown below), whereas nothing can be done to recover colors from a poorly chosen palette.

[image: rgbquant-32-lily-floyd]
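for reference, the dithered version above can be produced with RgbQuant's documented dithKern option, roughly like this sketch (cnv is assumed to be a canvas holding the original image):

```js
// a sketch of producing the Floyd-Steinberg dithered 32-color version
// above with RgbQuant.js; "cnv" is a canvas holding the original image.
const q = new RgbQuant({
  colors: 32,
  dithKern: "FloydSteinberg", // error-diffusion kernel to apply
  dithSerp: true              // serpentine scanning reduces directional artifacts
});
q.sample(cnv);                  // build the 32-color palette from the image
const dithered = q.reduce(cnv); // RGBA pixels remapped with error diffusion
```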

photopea commented 4 years ago

"essential", "significant", these words do not mean anything, I think my palette is better :D

leeoniya commented 4 years ago

ok, we'll agree to disagree :), but i cannot use the palettes generated by UPNG to attain the dithered results i'm looking for.

from https://github.com/photopea/UNN.js/issues/1#issuecomment-571977619:

> But you can use UPNG.js only to get a palette, and generate dithered images yourself.

this turns out to be untenable.

photopea commented 4 years ago

I am not saying your palette is better or worse. But you should "have numbers" instead of describing your feelings.

Could you at least compute the difference with the original photo?

leeoniya commented 4 years ago

the results are visually obvious to me. i would run SSIM for you if i were on the fence, but i'm very firmly in the "retaining distinct colors is critical" camp. if you need an objective metric to be convinced, please run it on the attached images.

if UPNG's primary objective is filesize and compressibility, and the target palette is rarely < 256 colors, then these results may very well not matter for this project; you don't market UPNG as a color quantization lib, it just happens to include one as a means to a specific end.

photopea commented 4 years ago

I still think there can be many people finding my quantization nicer than yours. I am very sad that our conversation moved to arguing about "religion" instead of "science".

leeoniya commented 4 years ago

there's no religion here. i specifically linked to a paper that describes exactly these limitations of partitioning quantizers, and methods of solving them. but instead of acknowledging that this is an issue, you prefer to argue why it's not an issue: that everything is subjective and that i need to prove it to you with an automated metric. if you don't see the obviousness of the concern in the attached images, then i don't know what more to say. i'm not the first person to notice this problem, considering there's active research into solving it.

> I still think there can be many people finding my quantization nicer than yours.

i encourage you to take a survey amongst your friends, family, or Photopea's userbase, rather than asking SSIM. SSIM is useful when visual inspection is inconclusive, and even then it does not reflect many facets of human perception and psychology. if i show someone a rainbow that's missing red, anyone who's ever seen a rainbow will ask "where's the red?", and no one will say "look at how nice the blue is!".

photopea commented 4 years ago

I made a GIF which shows the difference between each quantized image and the original image.

I think the goal of quantization is not to produce a beautiful image. The goal of quantization is to produce an image which is as similar to the original as possible, using the limited number of colors. If you decide that "blue should be more bluish" or you artificially saturate colors, I think that is not what quantization should do.

[image: quant]
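(Frames like these can be computed roughly as in the sketch below; the gain factor is an arbitrary choice to keep small errors visible.)

```js
// a sketch of a difference frame: per-pixel absolute difference between
// the original and a quantized version, amplified for visibility.
function diffFrame(orig, quant, gain) {
  const out = new Uint8ClampedArray(orig.length);
  for (let i = 0; i < orig.length; i += 4) {
    for (let c = 0; c < 3; c++)
      out[i + c] = Math.abs(orig[i + c] - quant[i + c]) * gain;
    out[i + 3] = 255; // keep the frame opaque
  }
  return out; // wrap in new ImageData(out, width) to draw onto a canvas
}
```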

leeoniya commented 4 years ago

this GIF is a perfect illustration of the problem. the goal i'm trying to achieve is to make sure that no single part of the image has significant, visually obvious errors. diffuse/distributed errors are visually more pleasing than drastic concentrated mistakes. this is not just my opinion, it's why error diffusion dithering works so damn well (when evaluated by humans rather than machines).
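for concreteness, here is error diffusion in its simplest (grayscale Floyd-Steinberg) form; RgbQuant applies the same idea per RGB channel via its dithKern option, and the palette/nearest helper here are purely illustrative:

```js
// minimal grayscale Floyd-Steinberg: each pixel's quantization error is
// pushed onto not-yet-visited neighbors instead of being discarded, so
// errors stay diffuse rather than pooling in one region of the image.
function fsDither(gray, w, h, palette) {
  const px  = Float32Array.from(gray); // working copy carrying the error
  const out = new Uint8ClampedArray(gray.length);
  const nearest = v => palette.reduce((a, b) =>
    Math.abs(b - v) < Math.abs(a - v) ? b : a);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      const i = y * w + x;
      const q = nearest(px[i]);
      const err = px[i] - q;
      out[i] = q;
      // classic 7/16, 3/16, 5/16, 1/16 distribution to unvisited neighbors
      if (x + 1 < w)              px[i + 1]     += err * 7 / 16;
      if (x > 0 && y + 1 < h)     px[i + w - 1] += err * 3 / 16;
      if (y + 1 < h)              px[i + w]     += err * 5 / 16;
      if (x + 1 < w && y + 1 < h) px[i + w + 1] += err * 1 / 16;
    }
  return out;
}
```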

if you ask a computer, then you're right - the cumulative sum of all RgbQuant errors is greater than UPNG's. but if you ask a human, then they'll ask UPNG, "i don't care that the grass is greener, where the f** is the yellow?".

RgbQuant aims to avoid drastic localized mistakes rather than aiming for algorithmic purity, and this GIF shows that it achieves exactly that.

leeoniya commented 4 years ago

my point is, https://www.hindawi.com/journals/cin/2016/5302957/ shows how you can potentially improve UPNG to avoid this issue, but you're saying there's no issue. so what's left to talk about?

this is not about RgbQuant vs UPNG. it's about what UPNG is today and what UPNG could be in the future.

photopea commented 4 years ago

I do agree that in UPNG there is a big error at the flowers. There is definitely space for improvement. But I still think the result is way better than your library's. And you didn't even try to dither the result of UPNG.

leeoniya commented 4 years ago

> And you didn't even try to dither the result of UPNG.

dithering does not add colors to a palette, so if my concern is the lack of yellow hues, dithering will not fix that. it may help, but yellow is so far from anything in UPNG's palette that the results will be worse than what i know is possible.
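to illustrate (hypothetical yellow-free palette, plain euclidean RGB distance): the nearest palette entry to yellow stays far away, and every dither pattern is a mix of entries like it, so the error can be spread around but never removed.

```js
// dithering only mixes colors already in the palette; with no yellow
// entries, the best match to yellow remains distant. palette is made up.
const yellow  = [240, 200, 40];
const palette = [[34, 80, 40], [90, 110, 60], [160, 150, 120]];
const dist = (a, b) => Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
const nearest = palette.reduce((best, c) =>
  dist(c, yellow) < dist(best, yellow) ? c : best);
console.log(nearest, dist(nearest, yellow).toFixed(1)); // large, irreducible error
```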