sobotka / olive

NLE video editor
GNU General Public License v3.0

[PIXELS] Color management is inaccurate in the CI-generated AppImage #141

Closed itsmattkc closed 3 years ago

itsmattkc commented 4 years ago

I've confirmed that with the current AppImage, exported image color can be severely inaccurate. This appears to affect export specifically (which always uses the CPU path) and does not appear to be related to the GPU inaccuracies we've seen in the past.

This appears to only affect the AppImage - building manually on Arch does not exhibit this issue (nor do the Windows or macOS builds). This is likely due to the extremely old versions of OpenImageIO and/or OpenColorIO used on the CI (1.6 and 1.0.9 respectively) and should therefore be fixed once the new CI (#126) is ready (since we'll be using the VFX Reference Platform versions, which are significantly newer), but I'm posting this as a reference for a potentially severe issue.

Original image: image

Exported image from AppImage: image

sobotka commented 4 years ago

I am unsure this is a bug.

The reason is that in this case, the image is pre-baked for display referred output. That is, opaque to the decoding, we have assumed nonlinearities built into the encoding that we do not have enough information to peel apart back to radiometric. In this specific case, the radiometric values of the icon are display referred. That means that when the proper encoding is assigned to the buffer, the best we can do is "decode" it back to the display linear range, which ends up slugged into the mixing working space directly from the encoding assumptions. That is, 0.0 in the encoding maps to radiometric 0.0, and 1.0 maps to 1.0.
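To make the endpoint mapping concrete, here is a small sketch assuming the standard sRGB decode (IEC 61966-2-1) as the display encoding in question; the function name is illustrative, not Olive's code:

```python
def srgb_eotf(v):
    """Decode a single sRGB-encoded value to display linear (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# The endpoints survive the decode unchanged: encoded 0.0 maps to
# display-linear 0.0, and encoded 1.0 maps to display-linear 1.0...
assert srgb_eotf(0.0) == 0.0
assert srgb_eotf(1.0) == 1.0

# ...while mid-range values land at different display-linear positions.
print(srgb_eotf(0.5))  # roughly 0.214
```

Nothing in that decode recovers scene radiometry; it only undoes the display encoding, which is the "best we can do" point above.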

When we consider footage, we are viewing the footage, as radiometric-like emissions, under a rendering transform. This is a high level rendering transform that takes a wide range of radiometric-like values. So if we had three pieces of Alexa footage and tried to integrate this display referred image, the problem hopefully reveals itself: the rendering transform maps a wide radiometric-like range of values, so the arbitrary display linear value 1.0 has no meaningful correspondence in that radiometric domain.

Applying lower thirds, and other motion graphics / display referred encodings, to radiometric-like footage requires granular control of the image state. In this case, if we wanted our blue lemons to be applied as a lower thirds icon on our Alexa footage, the proper chain is to:

  1. Take the Alexa footage out through the rendering transform. The rendering transform may have sophisticated gamut mapping and other considerations baked into the overall transform.
  2. Once we are through the rendering transform, we would ideally be in the display linear domain. This may also vary, given that the display linear ranges would be different between say, SDR for sRGB and EDR ranges such as a 1000 nit display.
  3. Composite on the relevant display linear result.
  4. Render out through the final rendering transform that would bake in the transfer function, and potentially apply a display referred gamut mapping where required etc.
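The four steps above can be sketched as follows. This is a minimal illustration, not Olive's implementation: a trivial clamp stands in for a real rendering transform (which would include tone and gamut mapping), the IEC 61966-2-1 sRGB functions stand in for the display encoding, and all names are hypothetical:

```python
def srgb_decode(v):
    """sRGB encoded -> display linear (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    """Display linear -> sRGB encoded (inverse of the above)."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def render_transform(v):
    """Stand-in for steps 1-2: scene linear -> display linear.
    A real rendering transform is far more sophisticated."""
    return min(max(v, 0.0), 1.0)

def comp_graphic_over_footage(footage_scene_linear, graphic_encoded, alpha):
    # Steps 1-2: take the footage out through the rendering transform.
    bg = [render_transform(v) for v in footage_scene_linear]
    # Decode the display referred graphic back to display linear.
    fg = [srgb_decode(v) for v in graphic_encoded]
    # Step 3: composite (simple "over") in display linear.
    out = [f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg)]
    # Step 4: bake the transfer function back in for output.
    return [srgb_encode(v) for v in out]
```

With alpha = 1.0 the graphic passes straight through the matched decode/encode pair, so the output pixel equals the original encoded pixel.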

If this is referring to the generic sRGB-like display output, then yes, it is indeed an issue, as the matched decode back to encode should result in a no-op.

itsmattkc commented 4 years ago

Yes, this result is from a simple sRGB input -> reference -> sRGB output transformation, and I think the most telling part is that it only seems to occur on the Linux AppImage built with significantly older dependencies - every other build appears to produce a no-op result. Clearly one of these results is a bug, and I'm guessing it's not the no-op.
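A quick way to show what a correct round trip looks like in isolation, assuming the standard IEC 61966-2-1 sRGB functions (this sketch does not go through OCIO at all, so it demonstrates the expected behavior rather than exercising the buggy dependency):

```python
def srgb_decode(v):
    """sRGB encoded -> display linear (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    """Display linear -> sRGB encoded (inverse of the above)."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# sRGB -> reference -> sRGB should be an identity up to float precision.
for i in range(101):
    v = i / 100.0
    assert abs(srgb_encode(srgb_decode(v)) - v) < 1e-6, v
```

Any output that deviates visibly from the input, as in the AppImage export above, means one of the two transforms in the pair is wrong.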

sobotka commented 4 years ago

Ah! My bad then.

I don’t know what would cause that. Sounds like further reason to migrate to OCIO V2.