multiSnow / mcomix3


Suggestion: color management support #154

Open wyatt8740 opened 3 years ago

wyatt8740 commented 3 years ago

Might be totally out of scope/too complex, etc., but I thought it might be worth bringing up that mcomix does not currently render pictures with any color management.

That's fine for me personally, since my screens are pretty average, but it might be bad for people with wide-gamut displays. It's also noticeable when some images render differently in Firefox (and eye of gnome, etc.) than they do in mcomix.

At a cursory glance, it looks like PIL might be able to provide what we need via ImageCms.
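
Something roughly like this, just going by the Pillow docs (untested, and 'monitor.icc' stands in for whatever profile the user would pick):

from PIL import Image, ImageCms

im = Image.open('page.jpg').convert('RGB')
# Assume sRGB for the source when nothing better is known.
srgb = ImageCms.createProfile('sRGB')
corrected = ImageCms.profileToProfile(im, srgb, 'monitor.icc',
                                      renderingIntent=ImageCms.INTENT_PERCEPTUAL)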

Additionally, I think preferences should be provided in the dialog to select a profile, rendering intent, and so forth, since not everyone runs colord.

I'm just throwing it out there as a suggestion; I might even have a go at it at some point, but since no one has mentioned it so far I thought I should bring it up.

I don't know how this would interact with non-Unix platforms, but I'd expect it probably works fine?

sakkamade commented 3 years ago

It's also noticeable when some images render differently in Firefox (and eye of gnome, etc.) than they do in mcomix.

Could you elaborate on this? With real-life examples, if possible.

I can't seem to discern any difference in colour with sRGB images and sRGB screen profile (which is the most basic), whether it is firefox, mcomix3 or geeqie.

wyatt8740 commented 3 years ago

Okay.

So basically, not every monitor can display the same range of colours. For example, my laptop does not cover the entire sRGB space. It only covers a fraction of it (about 51%). With its specific screen profile (created with DisplayCAL, not the generic sRGB one), color managed programs like GIMP can be made to render something closer to how the image actually would look on an sRGB screen. It's not perfect, but still in some circumstances an improvement.

On the left is my laptop (Thinkpad X201 Tablet) showing a piece of pixel art in GIMP (with my custom monitor profile loaded as "monitor profile" in settings), and mcomix showing the same picture on the same laptop.

On the right is my screen which covers 99.6% of sRGB (Dell U2412M) with its own profile loaded.

Since the native gamut of the U2412M is pretty much that of sRGB, there is almost no colour difference between the image in mcomix and in a managed program targeting sRGB. But since my laptop is far, far worse than that, the difference is significantly more noticeable.

[Photo: DSC_0503_color_profiles]

wyatt8740 commented 3 years ago

P.S. - the picture in question. This is of course an extreme example: lots of pink and purple, which are notably challenging colours for displays.

[Images: face2_abx, face2_abx_4x]

sakkamade commented 3 years ago

Thank you.

Now I see what you meant. You want to be able to load your custom colour profiles in mcomix, eh.

P.S. By eye, on my ancient ASUS VW222S, this image is displayed (almost?) exactly like on the right screen in the photo. I use no custom colour profiles, so there is no difference to me. I have only adjusted colours via nvidia-settings and, of course, the profile of the screen itself.

wyatt8740 commented 3 years ago

You want to be able to load your custom colour profiles in mcomix, eh.

Yes.

To accomplish this, I think we need to (just going by the documentation):

It's possible that every instance of buildTransform() above should actually be buildTransformFromOpenProfiles. I'm unclear on the difference between them, but suspect buildTransformFromOpenProfiles might have less overhead upon repeated use if we have the input and output profiles already.
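
If I'm reading the docs right, the difference is mainly whether you hand it filenames or already-opened profile objects; something like this (the paths are placeholders):

from PIL import ImageCms

# Open the profiles once up front...
in_prof = ImageCms.getOpenProfile('input.icc')     # or ImageCms.createProfile('sRGB')
out_prof = ImageCms.getOpenProfile('monitor.icc')  # placeholder paths

# ...then build the transform from the already-open profiles. Plain
# buildTransform() also accepts the filenames directly and opens them on each call.
xform = ImageCms.buildTransformFromOpenProfiles(
    in_prof, out_prof, 'RGB', 'RGB', ImageCms.INTENT_PERCEPTUAL)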

I can't tell what panel your ASUS VW222S has, but if it's at all like the VH222S, it's probably got a panel with around 97% of sRGB covered (like the CMO M216H1-L01). So yeah, I'd expect it does look pretty similar to mine, calibrated or no. :)

My U2412M (right monitor) looks very similar before and after calibration; the difference is much more noticeable on the laptop which is farther away from sRGB naturally.

Similar brightness, too, and yours uses a CCFL backlight which was usually what you want on those older screens for good colour quality.

Update: I just tested my basic description above in a python console window and it appears to work.

wyatt8740 commented 3 years ago

Just wrote a small dirty stand-alone program that encompasses much of the needed functionality. It burns an ICC profile onto the input image and writes the resulting bitmap out. I'm sure the logic and control can be improved a lot. It also probably should have a way to choose a rendering intent.

It currently just blindly converts to RGB to do the transformations. I think this should be fine, since images already in RGB just get copied according to the Pillow convert() sources. Might sometimes need to be RGBA instead in mcomix, though? I do not know the best way to determine quickly if an image has an alpha channel or not (unless it's to see if image.mode contains an 'A' in its string).
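
For what it's worth, the check I had in mind, written out (note that a palette-mode image that keeps its transparency in im.info would still slip past it and need converting first):

def working_mode(im):
    # 'A' appears in the mode string for RGBA, LA, PA, etc.
    return 'RGBA' if 'A' in im.mode else 'RGB'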

All resources (tests, etc.) are in the repository I made for it: GitHub/GitLab.

It also handles embedded profiles, and I have tested it on an Adobe RGB jpeg image as well as others to verify.

#! /usr/bin/env python3
import argparse
from io import BytesIO # for loading embedded profiles
from PIL import Image, ImageCms
parser=argparse.ArgumentParser(description='Burns ICC profiles into images.')
parser.add_argument('--icc_in', dest='icc_in', type=str, nargs=1, default=None,
                    help='ICC file to apply to the input. If none given, the embedded profile is used. If there is no embedded profile, then sRGB is automatically assumed.')
parser.add_argument('icc_out', metavar='icc_out', type=str, nargs=1,
                    help='ICC file to apply to the output.')
parser.add_argument('input_file', metavar='infile', type=str, nargs=1,
                    help='The file that needs an ICC profile burned into it.')
parser.add_argument('output_file', metavar='outfile', type=str, nargs=1,
                    help='Where to place the picture with the burned-in profile.')

args=parser.parse_args()

icc_in=args.icc_in
if args.icc_in is not None:
    icc_in=args.icc_in[0]

icc_out=args.icc_out[0]
input_file=args.input_file[0]
output_file=args.output_file[0]

# Image.open() raises an exception on failure rather than returning None.
try:
    input_file=Image.open(input_file)
except OSError as err:
    print("E: Could not open input image: %s" % err)
    exit(1)

# convert to RGB (convert() will just copy the image if it is already RGB).
if 'A' in input_file.mode:
    input_file=input_file.convert('RGBA')
else:
    input_file=input_file.convert('RGB')

# Choose an input color profile. Default to using sRGB, if none is given and none is embedded.
# --icc_in=<filename> will take precedence over any embedded profiles.
if icc_in is None:
    if input_file.info.get('icc_profile') is not None:
        icc_in=BytesIO(input_file.info.get('icc_profile')) # have to make the icc_profile (a Bytes object) look like a file for Pillow to be happy
    else:
        icc_in=ImageCms.createProfile("sRGB", -1)

xform=ImageCms.buildTransform(icc_in, icc_out,
                              input_file.mode, input_file.mode,
                              ImageCms.INTENT_PERCEPTUAL, 0)

img2=ImageCms.applyTransform(input_file, xform, inPlace=False)
img2.save(output_file)

If you want a profile to oversaturate colors for testing (just to make sure it works), have mine from my laptop (attached). x201tabletprofile.zip

Note that most pictures I find have no profile embedded or have sRGB embedded, so I think it'd be wise to keep a 'transformation' object for sRGB ready to go in memory (if color management is enabled), instead of recalculating it every time an image is redrawn or loaded (that's the xform variable here).
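
Something along these lines is what I mean (a sketch only; the monitor profile path is a placeholder for whatever ends up in preferences):

from PIL import ImageCms

MONITOR_ICC = 'monitor.icc'  # placeholder; would come from the preferences dialog

# Build the sRGB -> monitor transforms once, for both plain and alpha images,
# and reuse them for every picture that has no embedded profile.
_srgb = ImageCms.createProfile('sRGB')
SRGB_TO_MONITOR = {
    mode: ImageCms.buildTransform(_srgb, MONITOR_ICC, mode, mode,
                                  ImageCms.INTENT_PERCEPTUAL)
    for mode in ('RGB', 'RGBA')
}

def correct_untagged(im):
    # im is assumed to already be in 'RGB' or 'RGBA' mode.
    return ImageCms.applyTransform(im, SRGB_TO_MONITOR[im.mode])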

sakkamade commented 3 years ago

I can't tell what panel your ASUS VW222S has, but if it's at all like the VH222S, it's probably got a panel with around 97% of sRGB covered (like the CMO M216H1-L01). So yeah, I'd expect it does look pretty similar to mine, calibrated or no. :)

It is, somewhat (judged by eye, not by hardware measurement), but it is also about 10 years old :stuck_out_tongue_closed_eyes:

Similar brightness, too, and yours uses a CCFL backlight which was usually what you want on those older screens for good colour quality.

Exactly.

But sorry to disappoint you, I understand very little of what you have just stated. Also, even though I wanted to learn about those .icc profiles at some point, the reading is just too extensive to cover in my leisure time, nor am I a developer (not a Python one, anyhow), so I cannot be of any help to you with this:

I do not know the best way to determine quickly if an image has an alpha channel or not (unless it's to see if image.mode contains an 'A' in its string).

Although I believe you didn't expect me to.


Regardless, I have just tested your script and it does indeed change the colours quite well, however I am unable to say whether they are changing as expected, for I don't have my own profile. I'll only mention that this one x201tab.icc over-saturates the image well enough, :smile: and this one u2412m.icc makes the image slightly warmer.

however I am unable to say whether they are changing as expected, for I don't have my own profile.

You can simply take a look at the mpv issue tracker (this issue covers most of the question) to see that this is an area where one should be cautious, namely that the colour profile may be applied incorrectly.

Even so, some basic colour-profile support is better than none, eh.

sakkamade commented 3 years ago

Well, I decided to spend some more time on it, and installed xiccd for profile support. (I moved to Xfce not long ago.)

The profile HV121WX6... makes the white colour cyan-ish, while with your script the image becomes only over-saturated. I wonder why. Whereas U2412M... is more similar to the script output, perhaps even the same; it is rather difficult to compare.

wyatt8740 commented 3 years ago

Well, I decided to spend some more time on it, and installed xiccd for profile support. (I moved to Xfce not long ago.)

The profile HV121WX6... makes the white colour cyan-ish, while with your script the image becomes only over-saturated. I wonder why. Whereas U2412M... is more similar to the script output, perhaps even the same; it is rather difficult to compare.

It's because my HV121WX6 (thinkpad laptop screen) has an extremely warm/orange white point in normal use. My Dell U2412M monitor is very close to the correct white point, even without calibration.

My ICC profiles contain both:

- a calibration (vcgt) curve that gets loaded into the video card's lookup table, and
- a characterization of the display that colour-managed applications use for their transforms.

The color managed software works in tandem with the hardware lookup table to achieve good results while keeping the colour depth as high as possible.

If you apply the HV121WX6 profile to an image without also loading it on your GPU's lookup table, then the image will be extremely saturated because it is designed to cooperate with the hardware-loaded profile and both are specifically tuned to a screen with poor colour gamut.

Some very expensive monitors let you load a 3D lookup table into them directly; with those, you don't need to load an ICC profile on your computer at all, and the monitor will give you accurate colours everywhere.

Software colour management in GIMP, web browsers, image viewers, etc. is just the "poor man's" way of getting accurate colours (even though colorimeters are expensive, they are much cheaper than such monitors).

Unfortunately, doing things this way means that every program where accurate colours are desired needs to do transformations at the rendering stage so things display correctly on the screen, but others viewing the file will not see the corrections that were tuned specifically for your own monitor.

But sorry to disappoint you, I understand very little of what you have just stated. Also, even though I wanted to learn about those .icc profiles at some point, the reading is just too extensive to cover in my leisure time, nor am I a developer (not a Python one, anyhow), so I cannot be of any help to you with this:

Ah, I didn't realize you weren't multiSnow. My mistake! It was very late last night and I was exhausted.

In any case, I didn't really understand calibration at all, either, until a friend let me borrow his for a week to experiment with.

It's a simple premise, but there are so many variables involved that it's hard to explain, and the actual math behind the premise is also very complex and something I don't want to even look at, either. That's why I'm glad there's already a library for it.

I do not know the best way to determine quickly if an image has an alpha channel or not (unless it's to see if image.mode contains an 'A' in its string).

Although I believe you didn't expect me to.

I thought you were someone else… :p

Yeah, you have no reason to know. I think what I did should work, however.

Regardless, I have just tested your script and it does indeed change the colours quite well, however I am unable to say whether they are changing as expected, for I don't have my own profile.

That is as it should be. Though it's a shame there's no easy way to manually manipulate a computer's colour space. I could have at least gotten closer to correct if I could tweak the gamma ramps non-linearly.

I'll only mention that this one x201tab.icc over-saturates the image well enough, :smile: and this one u2412m.icc makes the image slightly warmer.

however I am unable to say whether they are changing as expected, for I don't have my own profile.

You can simply take a look at the mpv issue tracker (this issue covers most of the question) to see that this is an area where one should be cautious, namely that the colour profile may be applied incorrectly.

Even so, some basic colour-profile support is better than none, eh.

Yeah, I know about that. MPV is working with much, much more complicated color spaces, though - videos usually aren't shot in sRGB, but in very different colorspaces (and usually significantly wider ones).

You don't need all of those advanced adjustments for displaying sRGB-encoded images on an sRGB-calibrated computer. I believe that what I have here is adequate.

If you try the Sega Genesis image in my Git repository that I linked above, you'll see an example of transforming an image with a wider colour space to sRGB. The idea of perceptual intent in this case is to make contrast/brightness/intensity "feel" right and appear as close as possible to how the image would look on a screen capable of displaying the complete gamut.

As in my photo above, you can see that without correction the pinks in particular did not stand out as intensely on my laptop screen as they do on the sRGB screen (in fact, they looked like a subdued bluish purple). The same deal applies to showing Adobe RGB on an sRGB screen, since sRGB is a subset of Adobe RGB in the same way that my laptop's gamut is a subset of sRGB.

sakkamade commented 3 years ago

The profile HV121WX6... makes the white colour cyan-ish, while with your script the image becomes only over-saturated. I wonder why.

If you apply the HV121WX6 profile to an image without also loading it on your GPU's lookup table, then the image will be extremely saturated because it is designed to cooperate with the hardware-loaded profile and both are specifically tuned to a screen with poor colour gamut.

I see!

Ah, I didn't realize you weren't multiSnow. My mistake!

Hahaha, that was what I thought to myself :)

MPV is working with much, much more complicated color spaces, though - videos usually aren't shot in sRGB, but in very different colorspaces

Ah, indeed! Videos surely must be much more complex. Right. :+1:

wyatt8740 commented 3 years ago

@sakkamade @multiSnow

OK, so it has a couple of problems still (it will bug out if you try to give it nonexistent file names in the file chooser, for instance; I need to catch that!) but I now have working code for color correction in MComix. The only thing it doesn't work on is animated GIFs. I think I could probably fix that; I'm not sure what the CPU penalty would be, but I think it could be fine. It also currently does not apply to thumbnails, to save CPU cycles, but this could easily be changed and I think I might (at least for my own fork).

Took an embarrassingly long time (six hours) since I had to re-familiarize myself with the code and figure out a way to make the ICC profile metadata survive PIL Image → GDK Pixbuf and GDK Pixbuf → PIL Image conversions. But I managed it, eventually. GTK/GDK documentation (or lack thereof) is pretty confusing sometimes.

Right now I'm only correcting colors in the main viewer window; not thumbnails or any other icons in the program. This is better for performance, anyway. Also read the commit message if you want to see justifications for a couple of design choices.

Once I've caught some exceptions and tested it for a while, and cleaned up the junk (extraneous comments and such), I'll probably make a PR. For now I thought I'd give a heads-up, though.

The code is on my fork, in the 'colormanaged' branch.

Update: also seems to fail on some images with large color profiles, even though my stand-alone program does not have issues. Investigating why now.

Update, again: Once again it's Gnome's fault. Apparently there's a 4KB limit on the metadata buffer for pixbufs and some ICC profiles are bigger than that. So they get silently truncated.

I need to find some other way to propagate the profile through the transformations, ideally without any major refactoring. And I have absolutely no idea how I should do that since I can't seem to figure out the complete path from image loading to the screen. Wondering if we need separate pil_to_pixbuf and pixbuf_to_pil variants that return tuples, or something.

I thought pixbuf metadata was perfect, but it does fail on images with huge profiles using my current method (which was base64 encoding).

Edit one more time:

I think I figured it out, and it was easy. Use setattr instead of set_option(). I'll come back and patch things up after I sleep this time, but I pushed that fix commit. Still have to fix the file chooser and probably also see about pre-calculating a few common transformation matrices/caching recently used embedded ones for speed.
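
For the record, the idea is roughly this (the names are illustrative, not the actual code in the branch; pil_to_pixbuf/pixbuf_to_pil are the existing mcomix helpers):

def pil_to_pixbuf_with_profile(im):
    pixbuf = pil_to_pixbuf(im)
    # Stash the profile bytes as a plain Python attribute on the pixbuf wrapper;
    # set_option() values were getting silently truncated around 4 KB.
    setattr(pixbuf, 'icc_profile', im.info.get('icc_profile'))
    return pixbuf

def pixbuf_to_pil_with_profile(pixbuf):
    im = pixbuf_to_pil(pixbuf)
    icc = getattr(pixbuf, 'icc_profile', None)
    if icc:
        im.info['icc_profile'] = icc
    return im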

multiSnow commented 3 years ago

@wyatt8740

The reason you find it does not work with animated GIFs is that animated GIFs are forced to load only through gdk-pixbuf, because of the still-buggy GIF plugin of Pillow for animation. (See https://github.com/python-pillow/Pillow/labels/GIF. The fallback code is in 'image_tools.py'; the function name is 'load_animation'.)

I know nothing about color management. Maybe it is just like another filter? If so, I prefer to do it only with the PIL module at image-file loading time. Gdk-pixbuf is, and should be, only used to show something in a widget, while the PIL module does the excellent work with the image, IMHO.

wyatt8740 commented 3 years ago

@wyatt8740

The reason you find it does not work with animated GIFs is that animated GIFs are forced to load only through gdk-pixbuf, because of the still-buggy GIF plugin of Pillow for animation. (See https://github.com/python-pillow/Pillow/labels/GIF. The fallback code is in 'image_tools.py'; the function name is 'load_animation'.)

I know nothing about color management. Maybe it is just like another filter? If so, I prefer to do it only with the PIL module at image-file loading time. Gdk-pixbuf is, and should be, only used to show something in a widget, while the PIL module does the excellent work with the image, IMHO.

Yes, I saw that in the code and knew why it didn't work with GIFs; I was just mentioning that it did not.

Yes, technically the color correction stage is sort of like a filter. The problem is that you need to do the filtering after the resampling or you can introduce artifacts/false colours (how bad it is depends on the resampling algorithm). This is especially apparent in images with partial transparencies and gradients, in my opinion. It is supposed to be the last stage between any image processing and the rendering to a screen.

I am using PIL for it. I think, however, that the entire rectangle resize function could use rewriting to just use PIL Images consistently throughout and then return a pixbuf at the very end.
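
Roughly this shape, in other words (correct_colours and pil_to_pixbuf stand in for the real helpers; this is not the actual code):

from PIL import Image

def prepare_for_display(im, size):
    im = im.resize(size, Image.LANCZOS)  # resample first...
    im = correct_colours(im)             # ...then colour-correct the resampled result
    return pil_to_pixbuf(im)             # convert to a pixbuf only at the very end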

Currently, the fork works, but today I have to move to a new apartment. I can come back in a while to try to make it nicer.

wyatt8740 commented 3 years ago

Thought: I might want to reconsider this and additionally have MComix obey the _ICC_PROFILE x server atom, if present (and running on X). This way it'd work as expected even via X forwarding.
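
Reading the atom itself looks simple enough; a sketch with python-xlib (which would be a new, optional dependency, so this is only an assumption):

from Xlib import display, X

def x11_icc_profile():
    # Returns the raw ICC bytes advertised on the root window of screen 0,
    # or None if no _ICC_PROFILE atom is set.
    dpy = display.Display()
    root = dpy.screen().root
    atom = dpy.intern_atom('_ICC_PROFILE')
    prop = root.get_full_property(atom, X.AnyPropertyType)
    return bytes(prop.value) if prop else None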

Will have to figure something out. Anyway, letting you know I'm still working on this, although less intensely this weekend. If I can get it to not needlessly recalculate the matrix on every image (and thus be more performant on my PowerBook), I will consider it a win.

wyatt8740 commented 3 years ago

I've been doing more tweaks to my personal derivative version and vastly improved performance by keeping the transformation matrices for the two most common kinds of images that don't have an embedded profile (RGB and RGBA).

Unfortunately I've been doing this using globals, which I assume is a no-no. So I'll still have to find a way to get things cleaner before I think about doing a PR.
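
One option I'm considering is memoising the default transforms instead of keeping them in module globals; something like this (MONITOR_ICC being a placeholder for the preferences value):

from functools import lru_cache
from PIL import ImageCms

MONITOR_ICC = 'monitor.icc'  # placeholder

@lru_cache(maxsize=None)
def default_transform(mode):
    # Built once per mode ('RGB'/'RGBA') on first use, then reused.
    return ImageCms.buildTransform(ImageCms.createProfile('sRGB'),
                                   MONITOR_ICC, mode, mode,
                                   ImageCms.INTENT_PERCEPTUAL)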

multiSnow commented 3 years ago

You don't need to rush to complete this feature.

Maybe you have not yet noticed that I created a new branch named 'mage', which will take the place of the existing 'image_tools.py' and do all processing with PIL, only converting to a pixbuf just before the user wants to see it.

I will be glad if you can make the color-management function as simple as 'apply_icc(im) -> im', so that I can easily put it in the right place.

wyatt8740 commented 3 years ago

Ooh, I will take a peek at that! A PIL-based image_tools.py sounds perfect for me. Thanks!
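
For reference, I'd expect the function to end up roughly like this (a sketch only; the monitor profile path is a placeholder for the preferences setting):

from io import BytesIO
from PIL import ImageCms

MONITOR_ICC = 'monitor.icc'  # placeholder; from the preferences dialog

def apply_icc(im):
    # Pick a working mode, preferring RGBA when the image has an alpha channel.
    mode = 'RGBA' if 'A' in im.mode else 'RGB'
    im = im.convert(mode)
    # Use the embedded profile if present, otherwise assume sRGB.
    embedded = im.info.get('icc_profile')
    source = BytesIO(embedded) if embedded else ImageCms.createProfile('sRGB')
    xform = ImageCms.buildTransform(source, MONITOR_ICC, mode, mode,
                                    ImageCms.INTENT_PERCEPTUAL)
    return ImageCms.applyTransform(im, xform)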