GafferHQ / gaffer

Gaffer is a node-based application for lookdev, lighting and automation
http://www.gafferhq.org
BSD 3-Clause "New" or "Revised" License

Gaffer Color Improvements #5215

Closed jedypod closed 1 year ago

jedypod commented 1 year ago

Greetings!

I am Image Engine's new color supervisor. Over the past 5 months, I've had fun learning the pipeline and especially seeing Gaffer used to its full potential.

In that time I've seen a few pain points and areas that could be improved in the realm of color in Gaffer. I'll try to formalize that experience into some feature requests. Apologies for the quantity of things in this issue. I debated splitting it up into multiple issues, but I feel that everything I will talk about here is interrelated, so maybe it makes sense to have it all together in one place.

I'll attach an OCIO config I wrote in my own time to this post so that we have a point for discussion: https://mega.nz/file/erZBQCyZ#T73ncYUCQ41XYySJ_p3ta5SF71Iixs5iUo3pHmdGpTk

Also please bear with me if any of my assumptions are wrong, or if I'm missing anything obvious. I am definitely still learning the software!

OCIO Displays and Views

In OCIO we have the capability to define displays representing a viewing condition (a display device with certain characteristics, and a surround illumination condition). Examples of this might be ITU-R BT.1886 for SDR or ITU-R BT.2100 for HDR.

We also have the capability to define views representing different View Transforms within each display. How we use these features of OCIO is flexible, but a common scenario is to define a set of views with the same names for multiple displays. One increasingly common example in today's production reality is an SDR and an HDR display.

Gaffer seems to only support a single display: the default (first one in the list of active_displays). It would be great to have access to the display in the viewer. Nuke approaches this by dumping the views of all active displays into the views dropdown. This can get messy fast. A better approach might be to allow selection of the display device. UI real-estate in the viewer is obviously at a premium. Maybe this could be a setting located in the preferences, maybe accessible with one of those cog-wheel buttons near the views dropdown. Or maybe we just have a Color tab in the preferences where this can be configured. (Just brainstorming here, I'm sure you have a better idea for implementation).
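
To make the structure concrete, here's a small sketch using plain PyOpenColorIO (nothing Gaffer-specific; the config path is a placeholder) showing that each display carries its own ordered list of views, plus the active_displays / active_views lists in the order the config author intended:

```python
import PyOpenColorIO as OCIO

# Placeholder path - substitute any OCIO config.
config = OCIO.Config.CreateFromFile("/path/to/config.ocio")

# The config author's ordered lists.
print("active displays:", config.getActiveDisplays())
print("active views:", config.getActiveViews())

# Every display carries its own ordered list of views.
for display in config.getDisplays():
    for view in config.getViews(display):
        print(display, "/", view)

default_display = config.getDefaultDisplay()
print("default:", default_display, "/", config.getDefaultView(default_display))
```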

OCIO Views

Gaffer's approach to showing the active_views is confusing for artists. Gaffer seems to allow the user to choose a view in the preferences using the displayColorSpace.view plug, and whatever is chosen there is then mapped to the "Default" entry in the views dropdown.

Many times I've been chatting with lighters, explaining the color pipeline and they ask "And what does the Default view do?" I have to explain that it is not actually a view, but rather a special gaffer thing which is the same as this other view named X, and they do the same thing. Very confusing.

Another confusing thing is that the views are sorted alphabetically instead of being shown in the order defined in the OCIO config. The order is actually incredibly important. We intentionally put similar views next to each other, and there is a significant amount of muscle memory / recognition at work here. When a lighter switches from Nuke back to Gaffer, it would be valuable if the views were displayed in the same order.

I would propose that we just use the views as defined in the OCIO config's active_views field. The first one becomes the default. This is the same behavior as most other DCCs I'm aware of. Consistency would be valuable for the artist here I believe.

Register and UnRegister Views

In our pipeline it would be extremely valuable to be able to control the active_views live in the scene. We might want to have some views shown and other views hidden depending on where we are in the pipeline, and what kind of work is being done.

Daniel Dresser pointed me to this config file, which I believe can be used to register new views, but I believe you can not unregister existing views? (Perhaps there is a way to do this I do not know about). Maybe there's a better implementation for this as well, like having a plug in the scene preferences where you can directly control the active views?

Colorspaces in Gaffer

There's another thing that really confused me about the OCIO nodes in Gaffer at first. The default initial state of OCIO nodes' colorspace plugs is None. After chatting to Daniel Dresser about this, I guess the None plug value is a way of indicating that the knob has not yet been set (I may still misunderstand this). So this default plug value doesn't really do what the name implies. Instead it seems like it tries to infer some default behavior. For example, in ImageReader nodes, when the colorspace is set to "None", the rules defined in defaultColorSpace.py are used to try to infer the right colorspace from the input image format. However, if a ColorSpace node has its inputSpace plug set to "None" and its outputSpace plug set to some value, the node will not do anything.

In Nuke, the un-initialized value of colorspace knobs defaults to the working colorspace defined in the root workingSpaceLUT knob. (Perhaps this design would not translate to Gaffer's architecture).

I guess there are two things here: 1). Assuming we need a default "un-initialized" value for these plugs, would it make sense to re-think how this is presented to the user so that it is less confusing? 2). Would it make sense to use some default behavior for un-initialized plugs? For example in the ColorSpace node, I think it would make sense if the inputSpace plug defaulted to the working colorspace defined in the defaultColorSpace.py module (by default the scene_linear role).
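
To illustrate the kind of behavior I mean, here's a rough sketch in the spirit of the rules defaultColorSpace.py implements. The registration calls and the function signature are assumptions on my part and may differ between Gaffer versions, so treat it as illustrative rather than as the shipped config:

```python
import GafferImage

# Illustrative only: map file characteristics to a colorspace name.
# The argument list mirrors what I understand the default-colorspace
# function to receive; check your Gaffer version's defaultColorSpace.py
# for the real signature.
def __defaultColorSpace( fileName, fileFormat, dataType, metadata ):
    if fileFormat == "openexr" or dataType == "float":
        return "scene_linear"   # float data assumed to be scene-linear already
    return "sRGB"               # assumption: the config's display-referred 8-bit space

# Assumed registration points - the mechanism defaultColorSpace.py uses.
GafferImage.ImageReader.setDefaultColorSpaceFunction( __defaultColorSpace )
GafferImage.ImageWriter.setDefaultColorSpaceFunction( __defaultColorSpace )
```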

Colorspace Families

In an OCIO config you can define a family for each colorspace. It is common to use the / token to create groups for different classes of colorspace. This is very valuable for organizing and sorting large OCIO configs.

Gaffer does not seem to support families. It would be really great if we could add this functionality.

Gaffer also sorts the available colorspaces alphabetically. Echoing what I said above in the Views section, order is extremely important. We often group similar colorspaces together in a specific order in the OCIO config. Respecting that order and presenting the colorspaces in the order defined would be better (and more consistent with the behavior of other DCCs).
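
As a sketch of what I mean, both the grouping and the ordering can be derived directly from the config with plain PyOpenColorIO (assuming an OCIO v2 config is already set as the current one):

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
separator = config.getFamilySeparator()   # "/" by convention

# Python dicts preserve insertion order, so iterating the config's
# colorspaces keeps the author's intended ordering within each group.
menu = {}
for colorSpace in config.getColorSpaces():
    family = colorSpace.getFamily() or "Other"
    group = family.replace(separator, " / ")
    menu.setdefault(group, []).append(colorSpace.getName())

for group, names in menu.items():
    print(group, ":", ", ".join(names))
```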

Expose Default Colorspaces

Speaking of the defaults defined in the defaultColorSpace.py, I was wondering if it might make sense to expose these settings in some root-level scene preferences. It seems like a common thing that artists would want to adjust the behavior of, or at least have some visibility into what is happening. I admit that I had no idea this configuration was even a thing for many months of using Gaffer. I'm thinking especially of exposing the working colorspace (currently this defaults to the scene_linear role, but a user might very well want to specify a different working colorspace -- obviously still linear, but with a different gamut).

Roles

I like how Roles are put in their own subfolder. I think it's cleaner to present those alias colorspaces in this way. I would like to make an argument for not changing the names of the roles from how they are defined in the OCIO config though. Currently Gaffer changes the presentation of the role names to be Title Case, with underscore characters converted to spaces. I find this to be confusing. Most other DCCs present the role names as specified in the OCIO config. I think there is an argument for consistency here so as not to confuse people switching between different applications and expecting to see the same thing.

Gamma After View Transform

Currently in Gaffer, the order of operations in the viewer goes like this:

  1. Exposure control
  2. Gamma control
  3. View Transform

I believe we should change it to be as follows:

  1. Exposure control
  2. View Transform
  3. Gamma control

The exposure control is correctly applied in scene-linear. The view transform can be thought of as a compressive transform that takes the infinite inverted pyramid of our scene-linear image data, and compresses it down into the 0-1 box of our display-referred colorspace. Applying the power function in the gamma control after the view transform is much more useful: it behaves more like a "display-referred" exposure adjustment, whereas applied in scene-linear it acts as a contrast control. This is the order used in Nuke and RV, and I believe it is the behavior most artists are used to. Hopefully it would not be too difficult a change to make!
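
A toy sketch of the two orderings, with a placeholder view_transform() standing in for whatever the active OCIO view actually does (this is not Gaffer code, just the arithmetic):

```python
import numpy as np

def view_transform(rgb):
    # Placeholder: any scene-linear -> display-referred mapping.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.4)

def viewer_current(rgb, exposure=0.0, gamma=1.0):
    rgb = rgb * 2.0 ** exposure        # 1. exposure, in scene-linear (correct)
    rgb = rgb ** (1.0 / gamma)         # 2. gamma, in scene-linear (acts as contrast)
    return view_transform(rgb)         # 3. view transform

def viewer_proposed(rgb, exposure=0.0, gamma=1.0):
    rgb = rgb * 2.0 ** exposure        # 1. exposure, in scene-linear
    rgb = view_transform(rgb)          # 2. view transform
    return rgb ** (1.0 / gamma)        # 3. gamma, display-referred ("display exposure")
```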

Summary

I hope these observations and suggestions I've gathered over the last few months are useful! I am of course happy to discuss any of these points further, and I am sure there are modifications to my ideas above that we could make to better fit with the project.

Also if it makes more sense to split any of these out into individual issues, I am happy to do that. Just let me know what makes sense!

Thank you very much for your awesome work and for building such a great tool. Hopefully this helps to improve it a little bit for everyone.

johnhaddon commented 1 year ago

Hi! Thanks for your thoughts @jedypod. Discussing everything in one place seems reasonable for now - we can split off separate issues later if they need more detailed discussion or tracking. The main downside of the "mega ticket" from my point of view is that it's very hard to ever close one, so we might perhaps close this when we think we've made significant enough progress and then leave you to open separate issues for any stragglers.

Most of this sounds reasonable to me, with the caveat that I'm by no means an expert in this area. I'll comment on a few specifics below.

Many times I've been chatting with lighters, explaining the color pipeline and they ask "And what does the Default view do?" I have to explain that it is not actually a view, but rather a special gaffer thing which is the same as this other view named X, and they do the same thing. Very confusing.

As well as the transforms for the Viewer, Gaffer has a global display transform [^1] which applies to the rest of the UI - this is mainly used by colour swatches and colour pickers. That's primarily what you're choosing via the application preferences. And then the assumption was that ideally you'd be able to configure that once and then the "Default" setting in the Viewer wouldn't need to be touched. I guess this was too simplistic, not least because colour pickers probably want a more neutral treatment, without specific "looks" applied? I'm surprised a separate option for the colour-picking space isn't on your list actually, as it's something other folks have brought up before. Perhaps that is the solution though - a separate colour-picking space gives us something we can use as the transform for swatches and pickers, and then since this clearly has a separate purpose, there is no need to provide it as "Default" in the Viewer.

It would be great to have access to the display in the viewer.

One implementation detail here is that we've tried to ensure that Gaffer's core libraries remain agnostic of the colour management solution, so things other than OCIO are an option in future (or now, for anyone willing to configure them). This is a part of our general design philosophy - build modular independent components without too much coupling, and then tie them together only at the level of application configs. In this specific case, startup/gui/ocio.py is the only point at which we actually commit to using OCIO in the GUI - it could be replaced with another config that tied a different system in instead.

Currently that means that we've chosen a very simple coupling for the config, in that you can register as many named end-to-end display transforms as you want, and the Viewer builds a menu from them. To do a nicer UI in the Viewer, we might need to "loft up" the concepts of separate Displays and Views from OCIO into Gaffer. So there's a bit of tension between having a nice OCIO UI and being agnostic of other systems or terminologies. One approach might be for startup/gui/ocio.py to register an exhaustive list of transforms of the form Display/View, and then also register a custom widget for the Viewer so that it gave the appearance of selecting them independently.
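
To sketch what I mean by the "exhaustive list" idea (the OCIO side only - I've left out the Gaffer registration itself, since that's exactly the part we'd be designing):

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()

# One end-to-end transform per Display/View pair, named "Display/View".
displayTransforms = {}
for display in config.getDisplays():
    for view in config.getViews(display):
        displayTransforms["%s/%s" % (display, view)] = OCIO.DisplayViewTransform(
            src=OCIO.ROLE_SCENE_LINEAR, display=display, view=view
        )

# A custom Viewer widget could then present the display and the view as two
# separate choices, while still mapping onto these flat registrations.
```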

When I started writing this, I hadn't considered that ocio.py could also customise the UI, but now I've realised that I'm fairly happy that we can implement what you want without changing Gaffer's simple DisplayTransform API too much. But hopefully my prattling on has given you a flavour for how we try to structure things.

Daniel Dresser pointed me to this config file, which I believe can be used to register new views, but I believe you can not unregister existing views?

It shouldn't be a problem to add a mechanism for unregistering those. But perhaps it might be more suitable to deal with this at the UI level, with a mechanism for filtering which views are actually shown in the menus?

Colorspaces in Gaffer

Daniel had already chatted to me about this, and we agreed that it's confusing. None in the ImageReader and None in the ColorSpace node have completely different semantics. The ImageReader is "I will guess this for you", and the ColorSpace node is "I will not do anything until you tell me". We think calling it "Automatic" makes sense for the ImageReader, and defaulting to scene_linear makes sense for the ColorSpace node.

Speaking of the defaults defined in the defaultColorSpace.py, I was wondering if it might make sense to expose these settings in some root-level scene preferences. It seems like a common thing that artists would want to adjust the behavior of

I think there's also a danger in exposing too many things for the average user to fiddle with - colour is very hard to get right, and increasing the number of config permutations massively increases the chance of someone getting it wrong. As an out-of-the-box application there's an argument for exposing more stuff at the user level. But my assumption was that in a large pipeline you'd want one expert to get things right via unseen configs, and leave the user with the minimal number of carefully thought-out settings they need to do their job.

exposing the working colorspace

Chatting to Daniel previously, I'd understood this to be one of your more important requests, so I was surprised to see it only really getting a passing mention - I'd been assuming it was the main one till now. Related question (possibly a dumb one), but since OCIO lets you define what scene_linear is, why do you want a separate option for the working space?

And one more question, unrelated to any of the above : what should Gaffer's out-of-the-box OCIO config be?

[^1]: As a layman, by "display transform" I simply mean "a function we can give linear data to, and get back data we can send to the display". This is independent of any more precise terms in OCIO or elsewhere - please educate me if I could improve my use of terminology.

jedypod commented 1 year ago

Hi John! Thanks so much for your reply. All of what you said sounds good and reasonable. I'll elaborate on a few of those things below.

Displays/Views/Looks

we've tried to ensure that Gaffer's core libraries remain agnostic of the colour management solution, so things other than OCIO are an option in future

I think this is a great goal.

I'm fairly happy that we can implement what you want without changing Gaffer's simple DisplayTransform API too much.

Sounds good, thank you for explaining, it's helpful!

perhaps it might be more suitable to deal with this at the UI level, with a mechanism for filtering which views are actually shown in the menus?

This sounds great. The goal would be to show the active_views by default, but be able to override this in the application, either by pipeline or by user intervention. I guess it could be as simple as a string plug populated with the active_views, which we could override. (There is probably a better way).

Working Colorspace / Defaults

The ImageReader is "I will guess this for you", and the ColorSpace node is "I will not do anything until you tell me".

Thanks for explaining this! I get it now.

I think there's also a danger in exposing too many things for the average user to fiddle with

Generally speaking, I agree with you about this. However if you'll allow it, I'll make a couple counter-arguments on this topic.

We think calling it "Automatic" makes sense for the ImageReader, and defaulting to scene_linear makes sense for the ColorSpace node.

The colorspace conversions that are applied to images on read and write are fundamental to basic work in Gaffer: both the default colorspaces for different image types, and the working colorspace, which determines the state of color within the scene. I do not believe that obfuscating the operations being applied here is helpful to the user. I do think that a sensible set of defaults is important. I just think we should make it more obvious to the user what is happening, and make it easier to see how those defaults are configured. Allowing these settings to be set correctly by pipeline / people who know what they are doing is definitely valid, but hiding what is happening just adds to the confusion. As I usually say with this type of thing: knowledge is power, and the more artists understand about what is going on the better work they will do.

So in the ImageReader, instead of even having a colorspace named "Automatic" or "None", which tries to determine the best setting, why not just set the colorspace plug to the best setting. This way the user immediately sees what operation is being applied. The "best" setting here would be the colorspace configured as the default for this image type. (Either in the UI in the preferences, or in the defaultColorSpace.py config file).

And in the colorspace node, instead of replacing None with scene_linear, maybe we replace None with whatever colorspace is defined as the working_colorspace, which may or may not be the scene_linear role. (Not a new colorspace or setting, just setting it to whatever the working colorspace is defined as).

BTW there may be very valid counter-arguments here motivated by the needs of gaffer's architecture, which I might not be aware of!

Multiple Working Spaces?

since OCIO lets you define what scene_linear is, why do you want a separate option for the working space?

Great question. The answer is complex. In short, we might need different working gamuts for different stages of the pipeline. Right now it's possible to change this in the defaultColorSpace config, but (as I understand it) it's not easy to pipeline this dynamically between different stages of the pipeline.

Medium-length answer: In today's production reality, there is no single gamut representing the "scene". ACEScg attempts to do this and it fails. Why? Because it targets the human observer spectral locus, while the reality is that we work with images captured by a camera with its own camera observer spectral locus.

I don't want to get too far into the weeds so I'll leave it at that for now. Please ask if you want more info on this!

Color Picking

a separate colour-picking space gives us something we can use as the transform for swatches and pickers, and then since this clearly has a separate purpose, there is no need to provide it as "Default" in the Viewer.

Color-picking did not really come up in the conversations I've had so it didn't make the list! I'm glad you brought it up though.

Just to confirm: currently the color-picking only affects presentation of color swatches and pickers in the UI, not the output values from the pickers or swatches, correct? In Mari for example, this is not the case, and the color_picking setting determines the output color values from the picker (e.g., picking (0.5, 0.5, 0.5) outputs an rgb value of (0.214, 0.214, 0.214) if color_picking is set to the sRGB EOCF). This is a common source of confusion for artists.

For most work that we do in gaffer, I think it makes sense to directly specify rgb values within the working_colorspace. (I believe this is how it works now). However, I could see it being useful to be able to color manage plugs that specify color values.

Maybe we have something like these four settings per scene:

An example, contrived but probably not too far from production reality:
Say we are in a typical vfx pipeline with Texture, Lookdev, Lighting and Comp. We have received plates from the client shot with a Red digital cinema camera. We have decided to set our comp working colorspace to RedWideGamutRGB (linear obviously), because there are many narrow-spectra red led light sources in the shots we have. If we used a smaller gamut not designed for the camera observer spectral locus like ACEScg, large portions of our plates would be out of gamut, and we would have the particularly difficult challenge of doing good comp work on RGB pixels that have one or more negative components.

This show is the third movie in a franchise. And for this movie we must re-use an asset from the first movie. This asset was authored using ACEScg gamut. In a magical world where we had good color management, maybe this would not be a problem. Maybe we could set our lookdev Gaffer scene's working colorspace and color picking colorspace to ACEScg. Now we move to lighting. Now we have the challenge of taking our asset and putting it into the images of the scene in a convincing way. To do this we must work in the camera native gamut (RedWideGamutRGB), so that we can properly ray-trace in our crappy rgb render engine, with positive tristimulus values. In this magical scenario, all we have to do is change our working colorspace to RedWideGamutRGB (linear), and change our picker colorspace (color plug colorspace?) for our lookdev box to ACEScg, and shazzam, we have changed our scene to have rgb data encoded in RedWideGamutRGB and can load lookdev work done in a different gamut! There are all sorts of holes and complexity in this magical world of course. (How would you implement this type of color management for arbitrary shader nodes, how would you implement color management for textures loaded with osl, how can you find all possible plugs and tools that might author color values, how the heck would you author boxes with their own picker colorspaces, etc etc). But maybe this mini rant gives you an idea of the challenges we face these days with color.

But starting from a simplistic view, maybe in a lookdev scene we want our working colorspace to be ACEScg, but we don't want to allow picking of colors outside of P3, so we don't stray too far outside Pointer's Gamut. We can just set our working colorspace to ACEScg and set our picker colorspace to P3. Then we author rgb values in the picker as P3, and they get converted to the working colorspace. For example, say we want to author a cyan color. In P3 we specify it as the rgb value (0.014, 0.24, 0.38). This color value is color managed, so we know its rgb value and we know the gamut it is stored in (P3). Therefore we know how to convert it into the working gamut. After that 3x3 matrix is applied, we get the appropriate rgb value in ACEScg gamut: (0.08, 0.23, 0.37).
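
For what it's worth, the conversion in that example is exactly the sort of thing OCIO already does for us. Here's a sketch using placeholder colorspace names (they depend entirely on what the config calls its linear P3 and ACEScg spaces):

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()

# Placeholder names - use whatever your config calls these spaces.
processor = config.getProcessor("Linear P3-D65", "ACEScg")
cpu = processor.getDefaultCPUProcessor()

picked = [0.014, 0.24, 0.38]      # cyan authored in the picker (P3) space
working = cpu.applyRGB(picked)    # roughly (0.08, 0.23, 0.37) in ACEScg, per the example above
```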

This is definitely a complex topic and I admit I probably foraged a bit too deep into the weeds with the above. But hopefully the discussion sheds some light.

Chatting to Daniel previously, I'd understood this to be one of your more important requests, so I was surprised to see it only really getting a passing mention

Yes, as you can see from the above, it's a bit more complex than just the working colorspace. And after realizing that it could actually be configured in the defaultColorspaces.py I realized it wouldn't actually help us that much to have this exposed. I guess it's more of a wishlist item / nice to have thing, not mission critical.

Display Transforms!

As a layman, by "display transform" I simply mean "a function we can give linear data to, and get back data we can send to the display". This is independent of any more precise terms in OCIO or elsewhere - please educate me if I could improve my use of terminology.

Thanks for asking about this! We are definitely lacking in well-defined terms in this area, but let me try to elaborate a bit on this subject.

There are many names for the transformation of scene-linear image data to display-referred image data. (Display Transform, View Transform, Output Transform, Display Rendering Transform, the list goes on). Generally speaking, in simplified terms, this transform consists of two main pieces: "Image rendering", and display encoding.

Display encoding is pretty simple. It just encodes the data correctly for sending to a display device. An example might be encoding image data as Rec.709 gamut, with a 1/2.4 power transfer function, for presentation on a display device calibrated to Rec.1886. (The display's forward Electro-Optical Transfer Function or EOTF is a 2.4 power function, so we must encode the image data with the inverse EOTF to get our pixel intensity correctly out of the display as light intensity.)
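
As a tiny sketch of just the encoding half (gamut conversion to Rec.709 primaries omitted): we apply the inverse of the display's EOTF, so that the display's forward 2.4 power gives back the intended light intensity:

```python
def rec1886_inverse_eotf(value, gamma=2.4):
    # `value` is display-referred, already in the 0-1 range.
    value = min(max(value, 0.0), 1.0)
    return value ** (1.0 / gamma)

signal = rec1886_inverse_eotf(0.18)   # code value sent to the display
light = signal ** 2.4                 # the display's EOTF recovers ~0.18
```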

"Image rendering" is not so simple. It is creative and it is technical. It might have a print-film emulation LUT (PFE). It might have arbitrarily complex custom proprietary algorithms for manipulating color appearance. Generally speaking the operation is all about compression. We are compressing a high dynamic range scene-referred input. As linear, this scene-referred image data can be thought of as existing inside an infinitely extending inverted pyramid. The challenge of course is that our display takes display-referred image data, which exists inside of a finite bound: a cube (0-1 in 3 dimensions). So in addition to compressing intensity down, we also need to compress down the color data to fit. It's a non-trivial problem.

So say you have some display-referred image. How do you get back to the scene-referred data? The short answer is: you can't. It's like taking a compressed JPEG image and asking how you get back to the raw file captured with a camera. It's a lossy process. You can get back to an approximation of the scene-referred image state, if you have a LUT + other operations for the forward direction. But it is just an approximation.

You can also pretty easily remove the display-encoding portion of the display transform if you know what the target display device is. This can be useful for certain things.

Hopefully this helps explain...

OCIO Config

what should Gaffer's out-of-the-box OCIO config be?

Great question! Honestly I don't think there are many great options. At least with the current one (I believe gaffer currently ships with spi-vfx?) the filesize is not too big. I've been watching with raised eyebrows as Nuke ships with 830MiB of ACES OCIO configs. Certainly spi-vfx is pretty dated. Many of the colorspaces defined in there are no longer relevant (Who has used Panalog in the last 10 years?). Most non-hobbyist users are probably going to use their own OCIO config though, so not the end of the world. For the hobbyist (I hope we will be seeing more of those with Cycles+Windows), maybe there are better options. The ACESv2 configs at least are not as big. Actually it looks like they are built into the ocio v2 libraries.

Maybe that's an argument for allowing the OCIO config to be set from the UI / preferences instead of only as an environment variable.

All of that said, maybe there would be interest in adding some more color functionality as nodes in Gaffer? For example, just a gamut conversion node and a lin to log with all the common suspects of transfer functions would go a long way. I could probably copy over some other tools I've written to OSLImage code pretty easily.

Sorry this turned into quite the novel. This is what happens when you start talking to me about color stuff...

johnhaddon commented 1 year ago

The goal would be to show the active_views by default, but be able to override this in the application, either by pipeline or by user intervention. I guess it could be as simple as a string plug populated with the active_views, which we could override. (There is probably a better way).

Typically we use metadata for this sort of thing, so I was thinking of a bit of metadata like enabledViews that would default to * but which you could register another value with - like viewA viewC. One nice thing about metadata is that it can be dynamically computed, so you could register a function that would return different things depending on the state of the script. So you might detect that the script was used for lighting, and return a result based on that.
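
Something like the following is what I have in mind - very much a sketch, since the enabledViews key doesn't exist yet and the target it would be registered against is still to be decided:

```python
import Gaffer

def __enabledViews( plug ):
    # Dynamically computed metadata: return different view lists depending
    # on the state of the script this plug belongs to.
    script = plug.ancestor( Gaffer.ScriptNode )
    if script is not None and "lighting" in script["fileName"].getValue():
        return "viewA viewC"
    return "*"

# Hypothetical registration - the actual target and key are what we'd be adding.
# Gaffer.Metadata.registerValue( <viewerSettingsPlug>, "enabledViews", __enabledViews )
```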

The colorspace conversions that are applied to images on read and write are fundamental to basic work in Gaffer: both the default colorspaces for different image types, and the working colorspace, which determines the state of color within the scene. I do not believe that obfuscating the operations being applied here is helpful to the user.

I'm a bit removed from production these days, and I certainly can't speak for all users, but I think it might be fair to say that a significant number either don't have the same grip on colour theory that you do, or the desire to take the responsibility for it (I would include myself in both those categories). One of the benefits of Gaffer is that the big stuff can be dealt with by senior users and inherited transparently by juniors. Juniors isn't even the right word really - having an eye for making a great image isn't necessarily correlated with an affinity for the finer details of colour management.

As I usually say with this type of thing: knowledge is power, and the more artists understand about what is going on the better work they will do.

A little knowledge is a dangerous thing :)

So in the ImageReader, instead of even having a colorspace named "Automatic" or "None", which tries to determine the best setting, why not just set the colorspace plug to the best setting. This way the user immediately sees what operation is being applied.

That's a great idea, with one modification, because we can't set the colorspace plug to the best setting in the general case. If the filename is a constant, then yes, this can be done when the filename plug is given its value. But the filename might not be constant - it could be computed by an expression, or contain ${contextVariable} substitutions. This makes the filename dynamic, having different values in different contexts, meaning the best colourspace is also dynamic. As an aside, this "different results in different contexts" approach is where Gaffer derives a lot of its power, and fundamentally changes how you might approach even simple things. So for instance, in Nuke, you might make N Read nodes to load N AOVs, and then add and remove nodes as the number of AOVs changes. In Gaffer, you'd have a single ImageReader node that is evaluated in N contexts by a single downstream CollectImages node, and the number of AOVs could vary dynamically without any change to the graph structure (typically using an expression to tell the CollectImages node what contexts to operate in).
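
A toy illustration of the "different values in different contexts" point, using nothing but a Context and string substitutions (no real node graph):

```python
import Gaffer

fileName = "${shot}/beauty/${aov}.exr"

for aov in ( "diffuse", "specular", "emission" ) :
    context = Gaffer.Context()
    context["shot"] = "sh010"
    context["aov"] = aov
    # The same plug value yields a different file - and so potentially a
    # different best colourspace - in each context.
    print( context.substitute( fileName ) )
```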

So what I think we can do instead is to keep the colorSpace plug's value at "Automatic", but have the UI also tell you what that will evaluate to in the current context. So it might say "Automatic (sRGB)" for a JPEG or "Automatic (linear)" for an EXR. Does that sound OK?

I guess this approach could be summarised as giving the user knowledge (and debuggability), but keeping the power with the folks configuring the pipeline.

I guess it would also be possible to have the behaviour of defaultColorSpace.py be controlled by plugs in either the preferences or the script settings. But let me expand a bit on another reason we didn't do that. If we expose the behaviour only as plugs like colorSpaceFor8BitFiles and colorSpaceForEXRFiles, we've already constrained what you can do - map file types to colorspaces. If you want to embed the colorspace in filenames, which some pipelines do, then you're out of luck, or if you want to use metadata, again you're out of luck. So rather than hardcoding the allowed choices, we allow you to register a function where you have full control. We could also expose the defaultColorSpace.py choices as plugs, but allow your own config to remove those plugs and do your own thing instead. But you'll often find me using this defence : "We've chosen default behaviour based on our principles, but given you extensibility to change things based on your principles".

And in the colorspace node, instead of replacing None with scene_linear, maybe we replace None with whatever colorspace is defined as the working_colorspace, which may or may not be the scene_linear role. (Not a new colorspace or setting, just setting it to whatever the working colorspace is defined as).

The same applies here I think. I haven't thought through the details yet, but it seems likely that a configurable working colorspace would be implemented as a context variable. So the value in the plug will probably be something like ${image:workingSpace}, and the UI would expand that to show you what that means in the current context. So the user might see Working (RedWideGamutRGB) in their context, but actually the node would still be completely dynamic.

BTW there may be very valid counter-arguments here motivated by the needs of gaffer's architecture, which I might not be aware of!

I've rambled on a bit about some relevant details of the architecture above. None of it is a counter-argument really, but I thought you might appreciate the knowledge/power. Actually, one thing you may not be aware of is that it's possible to map Gaffer's context variables to OCIO's context variables. Cinesite use this to have a single OCIO context behave dynamically based on the current shot context in Gaffer.

Medium-length answer: In today's production reality, there is no single gamut representing the "scene". ACEScg attempts to do this and it fails. Why? Because it targets the human observer spectral locus, while the reality is that we work with images captured by a camera with its own camera observer spectral locus. I don't want to get too far into the weeds so I'll leave it at that for now. Please ask if you want more info on this!

I think medium-length is probably my limit here - thanks for the info!

I guess I will use this as one more opportunity to bang on about Gaffer's design philosophy though. We do want Gaffer to be capable of handling your production reality, but not at the expense of out-of-the-box simplicity, or someone else's production reality. I believe that for many people (myself included) every new colorspace option is just as much an opportunity to get things wrong as it is to get them right. Which is why my personal interest is in finding the sweet spot of simplicity and right-enough to give a good out-of-the-box experience, while still providing an API for power users.

Just to confirm: currently the color-picking only affects presentation of color swatches and pickers in the UI, not the output values from the pickers or swatches, correct?

Correct. At the API level, we assume that all values stored in plugs are in the working space. The colour picker UI just maps them through a display transform for showing them on screen. The numbers in the colour picker are also in working space, so if you type in 0.5, 0.5, 0.5 those are the values you'll get. I'm not proposing we change that - I'm just proposing a separate control to say which display transform is used.

swatch display colorspace (sets the colorspace used to present ui swatches to the viewer. probably defaults to the view transform)

Why would this be different to the colour picker's transform? I think it would be incredibly confusing to click on a swatch, to find yourself editing what appears to be a different colour in the colour picker.

viewer display colorspace (sets the colorspace used to convert from scene-referred to display referred for presenting images in the viewer)

Would this be a central setting, or also configurable per viewer? If both, then aren't we back to the "Default" confusion?

To have a concrete alternative to talk about, this seems to me to be the most minimal version :

  • A working space setting in the Script Settings.
  • A color picker space setting in the Script Settings.
  • Choice of Display/View per Viewer, in each Viewer itself.

Does that work?

In this magical scenario, all we have to do is change our working colorspace to RedWideGamutRGB (linear), and change our picker colorspace (color plug colorspace?) for our lookdev box to ACEScg, and shazzam, we have changed our scene to have rgb data encoded in RedWideGamutRGB and can load lookdev work done in a different gamut! There are all sorts of holes...

The biggest hole seems to be that the values in the lookdev box are in ACEScg, but they're getting loaded into a script with a different working space. This is one place where I would want to draw the line I think - I don't want to have to augment every colour plug in Gaffer with information about what space it is in, and then automatically transform those values into a different working space on demand. I'd argue that this is best dealt with as an import/export problem, converting the old lookdev into the new world in a one-off step.

Yes, as you can see from the above, it's a bit more complex than just the working colorspace. And after realizing that it could actually be configured in the defaultColorspaces.py I realized it wouldn't actually help us that much to have this exposed. I guess it's more of a wishlist item / nice to have thing, not mission critical.

I think modifying defaultColorSpace.py gets you some of the way, but it still leaves the ColorSpace node dangling without helpful defaults, and it wouldn't be as dynamic as being able to specify the colourspace per .gfr file (or per context)? The other thing we were considering doing if we introduced a working space control, was to use it to automatically configure Arnold to work in that space too. So I think we're reasonably sold on the idea...

Generally speaking, in simplified terms, this transform consists of two main pieces: "Image rendering", and display encoding.

Thanks - that's a helpful way for me to think of it.

I believe gaffer currently ships with spi-vfx?

It's actually nuke-default, which is the same vintage as spi-vfx - they're both in OpenColorIO-Configs-1.0. If I understand correctly, this basically only has the "display encoding" part, and isn't really doing any "image rendering", is that right?

For the hobbyist (I hope we will be seeing more of those with Cycles+Windows), maybe there are better options. The ACESv2 configs at least are not as big. Actually it looks like they are built into the ocio v2 libraries.

Oh, cool, so maybe that's the right default then? I found myself going back down an internet rabbit hole trying to understand the benefits, only to find that it led right back to you :) So maybe we should be using your transform?

Maybe that's an argument for allowing the OCIO config to be set from the UI / preferences instead of only as an environment variable.

I think the main argument for that is if people have different transforms they want to use in different Gaffer contexts (sequence, shot etc), but which aren't easily implemented in a single OCIO config with OCIO variables providing the differences. It seems like we're shaping up to be making all sorts of OCIO improvements for Gaffer 1.3, so it seems reasonable to throw this in as well.

All of that said, maybe there would be interest in adding some more color functionality as nodes in Gaffer? For example, just a gamut conversion node and a lin to log with all the common suspects of transfer functions would go a long way. I could probably copy over some other tools I've written to OSLImage code pretty easily.

If they fill existing holes and "do one thing and do it well" then they sound like good candidates. As core nodes it would probably be preferable to have them as C++ nodes in GafferImage without the dependency on GafferOSL though...

Sorry this turned into quite the novel. This is what happens when you start talking to me about color stuff...

No worries. This is all stuff I'm dimly aware of existing somewhere out there well outside my area of expertise, so it's useful to have it explained patiently and coherently by someone who knows what they're talking about. In return I have prattled on about contexts and a vague philosophy of as-simple-as-possible-but-then-extensible a bit too much :)

johnhaddon commented 1 year ago

If my last post seems reasonable, then I think the next step is probably to start to put some things into action. There seem to be two broad categories of thing :

  1. UI and configuration niceties. Things like menu ordering, name formatting, evaluation of current "Automatic" colour spaces. These could get drip-fed into 1.2.x releases without causing a problem.
  2. Potentially breaking changes. Things like changing the plugs in the preferences, moving them to the script settings etc. These would need to go into 1.3.

Is it valuable to you to get the first category into 1.2.x, or would it make sense to batch it all up together and say "Gaffer 1.3 is all things OCIO". In case you didn't know already, the other thing we're doing for 1.3 already is to bring OCIO colour management to the 3D viewer.

jedypod commented 1 year ago

So what I think we can do instead is to keep the colorSpace plug's value at "Automatic", but have the UI also tell you what that will evaluate to in the current context. So it might say "Automatic (sRGB)" for a JPEG or "Automatic (linear)" for an EXR. Does that sound OK?

Thanks for explaining a bit more about contexts and dynamic evaluation -- coming from a Nuke background I'm definitely still wrapping my brain around this. What you suggest sounds like a very good solution and I'm all for it.

"We've chosen default behaviour based on our principles, but given you extensibility to change things based on your principles".

Thanks for explaining this, it does make sense. Certainly we don't want to remove flexibility or power from the user. I do think it would be valuable for a hobbyist-level user to have these settings available in the UI, but for us using Gaffer in a big pipeline, configuring these settings through pipeline code and python configuration files is definitely workable. And thanks for explaining about registering a function to control these settings dynamically. It's good to know this is possible with the current design. So yeah, in summary, I'll defer to your better judgement on this topic!

I'm just proposing a separate control to say which display transform is used.

This sounds good to me!

Would this be a central setting, or also configurable per viewer? If both, then aren't we back to the "Default" confusion? To have a concrete alternative to talk about, this seems to me to be the most minimal version :

  • A working space setting in the Script Settings.
  • A color picker space setting in the Script Settings.
  • Choice of Display/View per Viewer, in each Viewer itself.

Sorry for the confusion. This would definitely be configurable per viewer, not a central setting. Your proposal sounds good to me. I was just bringing up the color picker topic because in other DCCs (Mari), setting the color picker space sets both the swatch display and the resulting rgb values. I agree with you that it is incredibly confusing. And I'm totally on board with your simple proposal here. For the work we do in Gaffer I think this is fine. One small tweak might be to call it something other than "color picker space", so that we make it clear it is only affecting the presentation of swatches in the UI rather than actually being the colorspace that we pick colors in.

If I understand correctly, this basically only has the "display encoding" part, and isn't really doing any "image rendering", is that right?

Yes, the old nuke-default config only has display encoding, no "image rendering" transforms. Also, I would say its biggest weakness is that it does not manage colorimetry. It is a set of colorspace definitions which define only the transfer function. As you probably know, a colorspace (an informal and poorly defined term) is made up of two aspects: 1). The transfer function, which defines the relationship of the pixel intensity to linear. 2). The gamut, which defines the colorimetry of a pure red, green, and blue color, and the color of the neutral axis. By color here I mean the xy chromaticity coordinate in the CIE 1931 system of colorimetry which we all use. You can't have a colorspace without those two things. linear is not a colorspace! :)
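
Purely as an illustration of those two ingredients, here's a sketch using the well-known Rec.709/sRGB primaries and D65 white point as CIE 1931 xy chromaticities:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class IllustrativeColorSpace:
    transfer_function: Callable[[float], float]   # encoded value -> linear
    primaries: Tuple[Tuple[float, float], ...]    # (R, G, B) xy chromaticities
    white_point: Tuple[float, float]              # xy chromaticity of the neutral axis

rec1886 = IllustrativeColorSpace(
    transfer_function=lambda v: v ** 2.4,
    primaries=((0.640, 0.330), (0.300, 0.600), (0.150, 0.060)),
    white_point=(0.3127, 0.3290),
)

# nuke-default's colorspaces only pin down the first field; without the
# primaries and white point, the colorimetry is left unmanaged.
```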

Oh, cool, so maybe that's the right default then? I found myself going back down an internet rabbit hole trying to understand the benefits, only to find that it led right back to you :) So maybe we should be using your transform?

Ha! You found me out. The OCIO v2 ACES configs would probably be more useful than the old nuke-default config, but they are not without their faults. They would make a reasonable default choice today though, and the added benefit is that they would take up zero extra space since we are already using the OCIO v2 libraries. I'm also working on a more minimalist OCIO boilerplate config designed for vfx use in my spare time, but obviously it is not yet released.

In return I have prattled on about contexts and a vague philosophy of as-simple-as-possible-but-then-extensible a bit too much :)

Thank you very much for explaining! I have learned quite a few useful things from your prattling! :) - BTW I'm completely in favor of the simple-as-possible design philosophy. Your vigilance in this area is fantastic, and clearly apparent throughout Gaffer's design, so please don't change this!

Is it valuable to you to get the first category into 1.2.x, or would it make sense to batch it all up together and say "Gaffer 1.3 is all things OCIO".

Yes I think this would be valuable to get into a 1.2.x release. Though simple changes, I feel these refinements would have a big positive impact for artists.

This sounds like a good plan! I'm pretty sure you would be better suited for formalizing this big ramble into actionable changes, but if you need any help splitting this out into individual issues, or for anything else at all, please don't hesitate.

I'm excited for the future! Thanks for your patience and talking this through with me!

johnhaddon commented 1 year ago

OK, great. I'll make a start on some of this when I've got my current tasks wrapped up. Here's my attempt at distilling action items out of our conversation so far - feel free to adjust where I've got things wrong :

I'll get in touch to discuss more specific details as things come up.

johnhaddon commented 1 year ago

@masterkeech, would be good if you could cast your eye over this stuff to check that it's all positive from your point of view too.

johnhaddon commented 1 year ago

I stumbled into the OCIO Slack UX channel today, and found this :

The UX working group discussed whether roles should be exposed in application menus and the consensus was that they should not, except in unusual scenarios. To summarize, the decision of what's in the menus should be under the control of the config author, and if they want them to show up, they are free to create color spaces with similar names. Applications should not be over-riding the config author and adding other names to the menu.

Should we remove the Roles submenu completely then @jedypod? I'm not sure I really understand the decision - my limited understanding was that roles were useful because their names and semantics were consistent across configs, so you could configure a transform that would work in any context.

johnhaddon commented 1 year ago

Should we remove the Roles submenu completely then @jedypod?

What I've done for now is remove them by default, but provide control over that on a per-plug basis using metadata in Gaffer.

jedypod commented 1 year ago

Sorry for the delay, just seeing this now.

Should we remove the Roles submenu completely

It is surprising to me that the recommendation from the OCIO folks is to not expose roles at all in DCCs. Maybe there's a good reason I don't understand, but the reality is that we use roles all the time. For example, we use the compositing_linear role for many color transforms in our workflow. The compositing_linear role might be Arri Wide Gamut 3 on one show, or Red Wide Gamut or Sony SGamut3.Cine on another show. Using the role instead of the aliased colorspace allows our workflow to be agnostic of the specific configuration of the show.

I will never argue against configurability, but I would prefer that we keep the default as roles available in a submenu (as it currently is).

johnhaddon commented 1 year ago

I would prefer that we keep the default as roles available in a submenu

Cool. That's how I have it in #5232 now, but with some additional metadata you can use to turn them off on a per-plug basis if you want...