Closed: WestLangley closed this issue 4 years ago.
Actually, before we go down that path, perhaps we should decide on decoupling "encoding" into color space and encoding format, as suggested by @bhouston.
If we split color space conversion from the encodings, we could do the above. It would be more flexible and clear. We could also support LUTs in the color space conversions as well.
And then each texture and render target should have both a ColorSpace (Linear, sRGB, AdobeRGB, Gamma20, Gamma22) and an Encoding (None, RGBE, RGBM, RGBD, LogLUV, CMYK) parameter.
This is something I would support.
> Actually, before we go down that path, perhaps we should decide on decoupling "encoding" into color space and encoding format, as suggested by @bhouston.
Separating colorSpace and encoding is not that hard. I would just copy the code used to inject functions for encoding into the glsl and modify it to inject colorSpace transforms. And then split the enums into a color space list and an encoding list. I guess I am saying just use the existing design for encodings, but duplicate it again for color spaces.
I think that is sort of the easy, obvious solution.
The injectors are pretty simple - they just create specially named functions that map onto the chosen encoding function:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/webgl/WebGLProgram.js#L12
I use the same injection idea for tonemapping:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/webgl/WebGLProgram.js#L51
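For illustration, the injector pattern boils down to string-building a wrapper function. This is a sketch only: the mapping table and function names below are invented for the example, not the actual three.js internals.

```javascript
// Sketch of the injection idea: given the generic name the shader expects and
// a chosen encoding, emit a GLSL wrapper with that name which forwards to the
// encoding's concrete conversion function. (Names are illustrative.)
const encodingToGLSLName = {
	Linear: 'LinearToLinear',
	sRGB: 'LinearTosRGB',
	RGBM16: 'LinearToRGBM16'
};

function getTexelEncodingFunction( functionName, encoding ) {
	const impl = encodingToGLSLName[ encoding ];
	if ( impl === undefined ) {
		throw new Error( 'Unsupported encoding: ' + encoding );
	}
	return 'vec4 ' + functionName + '( vec4 value ) { return ' + impl + '( value ); }';
}

console.log( getTexelEncodingFunction( 'linearToOutputTexel', 'sRGB' ) );
// → vec4 linearToOutputTexel( vec4 value ) { return LinearTosRGB( value ); }
```

The shader body only ever calls the generic name, so switching encodings is just a recompile with a different wrapper injected.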
Hmm... I wonder if tone mapping could be combined with color spaces? They are sort of the same thing theoretically -- a color transform. And they sort of happen at the same time in the renderer pipeline (before the encoding.)
And then moved from a global on WebGLRenderer to something that is texture specific (except we would use the global colorSpace / toneMapping operator when one doesn't have a render target, in the same fashion that WebGLRenderer.outputEncoding would be used.)
I have run into issues with the global tone mapping operator -- and as it is currently designed it is annoying to disable and re-enable. And I think a per texture colorspace/tonemapper may be more useful, and impose no limitations that I can think of.
Is there precedent for considering tonemapping a form of color space conversion? I wonder because it would justify combining them.
I think the order may be:

1. toneMapping (gamut reduction)
2. colorSpace (gamut transforms; assumes inputs are between 0-1)
3. encoding (assumes inputs are between 0-1)
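To make the ordering concrete, here is a toy single-channel sketch. The specific operators (Reinhard, a 2.2 gamma curve, an identity encoding) are just examples, not a proposal.

```javascript
// Toy single-channel pipeline in the proposed order. Real GPU code would do
// this per texel in GLSL; the operators here are illustrative examples only.
const reinhard = ( c ) => c / ( 1 + c );                  // 1. toneMapping: HDR -> [0,1]
const linearToGamma22 = ( c ) => Math.pow( c, 1 / 2.2 );  // 2. colorSpace: assumes [0,1] input
const identityEncoding = ( c ) => c;                      // 3. encoding: assumes [0,1] input

const out = identityEncoding( linearToGamma22( reinhard( 3.0 ) ) );
console.log( out.toFixed( 3 ) ); // ≈ 0.877: an HDR value of 3.0, tone mapped then gamma corrected
```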
I guess you may want to use a Reinhard tone map operator but may or may not want sRGB output -- thus that argues for a separation of toneMapping from colorSpace.
Another complication is that HDR TVs and monitors are coming and use different gamma curves, such as this one: https://en.wikipedia.org/wiki/Rec._2020
To summarize:
toneMapping ( to LDR )
colorSpace ( Linear, sRGB, AdobeRGB, Gamma20, Gamma22, Grayscale, CustomLookup )
encoding ( RGBA, RGBE, RGBM, RGBD, LogLUV, CMYK )
This means all textures and render-target textures have the following two properties:
texture.colorSpace = THREE.sRGBColorSpace; // for example
texture.encoding = THREE.NoEncoding; // is THREE.RGBAEncoding a better term ???
So instead of adding renderer.outputEncoding, we would add:

renderer.colorSpace = THREE.sRGBColorSpace; // THREE.LinearColorSpace, etc.
// renderer.encoding = THREE.RGBAEncoding; // I guess that is not needed
> that argues for a separation of toneMapping from colorSpace.
Yes.
Yeah. Just to be clear I see this as an eventual end point:
texture.toneMapping
texture.colorSpace
texture.encoding
// and for the case of the null renderTarget, rendering to screen (which is sort of an annoying exception):
renderer.toneMapping
renderer.colorSpace
// renderer.encoding - this should not need to be set, always RGBAEncoding or NoEncoding.
I guess putting these colorSpace/toneMapping/encodings on textures/renderTargets could get weird when one reuses them in the EffectComposer -- you need to be careful what you set because we recycle readBuffer and writeBuffer as we step through the composer. I guess we already have that problem, so maybe it's not a real issue right now.
Actually you are right, no encoding needed for renderer. It should always be RGBAEncoding.
> texture.toneMapping

does not sound right to me...
I am now thinking we need to separate further:
toneMapping ( HDR to LDR )
colorMapping ( to Grayscale, to CustomLookupTable )
colorSpace ( Linear, sRGB, AdobeRGB, Gamma20, Gamma22 )
encoding ( RGBA, RGBE, RGBM, RGBD, LogLUV, CMYK )
Tone mapping in this case is really a property of a render target. The renderer is a stand-in for the default render target:
renderer.toneMapping (when rendering to the screen)
renderTarget.toneMapping
Likewise, color mapping: it is essentially a post-processing effect, but applied as-we-go.
renderer.colorMapping (when rendering to the screen)
renderTarget.colorMapping
Color space is a property of a texture, render target texture, and renderer output.
texture.colorSpace
renderTarget.texture.colorSpace
renderer.colorSpace
Encoding is a property of a texture and a render target texture.
texture.encoding
renderTarget.texture.encoding
So, to summarize
renderer.toneMapping
renderer.colorMapping
renderer.colorSpace
//renderer.encoding - not needed; always RGBA
renderTarget.toneMapping
renderTarget.colorMapping
texture.colorSpace
texture.encoding
That's the way I see it... for now. :)
Well, we could view this as a list of inline post-processing transforms. Encoding, we run through the list forward with forward transforms; decoding, we go backwards with inverse transforms.
Such an interface would look like:
texture.texelOps.add( THREE.FilmicToneMapping, { exposure: exposure, whitePoint: whitePoint } );
texture.texelOps.add( THREE.sRGBColorSpace );
texture.texelOps.add( THREE.RGBMEncoding, { multiplier: 16 } );
This reduces it to a single list of transforms/operators, and you can have an arbitrary number. Implementation would be simpler and it would be more flexible going forward. Maybe each operator could be a class that contains its own code injection method so it is better encapsulated and infinitely extensible by others -- rather than hard coded enums on THREE.
I've never seen this design paradigm before. That doesn't mean it is wrong, but it makes me not super confident in pushing it. But if others agree....
The texel op class would look like this -- this is the default that doesn't do anything. You would override it with better data.
```js
class TexelOp {

	// Default implementation: identity transforms. Override with real conversions.
	// ( "input" is a reserved word in GLSL, so the parameter is named "value". )
	forwardOp( name, parameters ) {
		return "vec4 " + name + "( vec4 value ) { return value; }";
	}

	backwardOp( name, parameters ) {
		return "vec4 " + name + "( vec4 value ) { return value; }";
	}

}
```
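As a concrete illustration, a hypothetical sRGB operator under that interface might look like the following. This is a sketch only: the TexelOp API is just the proposal above, not an existing three.js API, and the GLSL uses the cheap 2.2-power approximation rather than the exact piecewise sRGB curve.

```javascript
// Minimal stand-in for the proposed TexelOp base class, repeated here so the
// sketch is self-contained.
class TexelOp {
	forwardOp( name, parameters ) { return "vec4 " + name + "( vec4 value ) { return value; }"; }
	backwardOp( name, parameters ) { return "vec4 " + name + "( vec4 value ) { return value; }"; }
}

// Hypothetical sRGB operator: forward = linear -> sRGB, backward = sRGB -> linear,
// both approximated with a plain 2.2 power curve for brevity.
class SRGBTexelOp extends TexelOp {
	forwardOp( name, parameters ) {
		return "vec4 " + name + "( vec4 c ) { return vec4( pow( c.rgb, vec3( 1.0 / 2.2 ) ), c.a ); }";
	}
	backwardOp( name, parameters ) {
		return "vec4 " + name + "( vec4 c ) { return vec4( pow( c.rgb, vec3( 2.2 ) ), c.a ); }";
	}
}

console.log( new SRGBTexelOp().forwardOp( "linearToOutputTexel" ) );
```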
@bhouston I like that approach!
It may help avoid surprises for new users if we changed the default to renderer.gammaOutput=true, or whatever the equivalent outputEncoding or texelOp syntax ends up being... Is it worthwhile to do that, and deprecate gammaInput, before we implement a more flexible interface here?

GLTFLoader automatically sets material.map.encoding = THREE.sRGBEncoding (required by glTF, and likely to be generally correct for other formats) and assumes the user has set gammaOutput=true. Frequently, users find this confusing.
I believe that renderer.gammaOutput should be deprecated in favor of renderer.outputEncoding.
Right. See point 2 in original post.
If we're renaming it anyway, why not renderer.colorSpace or renderer.outputColorSpace, per https://github.com/mrdoob/three.js/issues/11337#issuecomment-301599802? It seems like encoding is out of place, and toneMapping can be added later.
Basically start by implementing colorSpace (output/encode) on WebGLRenderTarget and then proxy that to WebGLRenderer. I would suggest adding colorSpace (input/decode) as well to WebGLTexture.
It was my mistake to combine both colorSpace and encodings when implementing encodings. They should have been separate.
BTW, I have been looking a lot at 3D LUTs recently for colorMappings; they can produce pretty beautiful results.
> BTW, I have been looking a lot at 3D LUTs recently for colorMappings; they can produce pretty beautiful results.
links? :-)
Uses color mapping: https://wizgrav.github.io/aframe-effects/ https://github.com/wizgrav/aframe-effects/tree/master/components/colors
Originally based on this demo, which has disappeared unfortunately: https://twitter.com/mattdesl/status/712043209652903936?lang=en
Color mapping is also known as "color grading" in the film industry. Good introductory video: https://www.youtube.com/watch?v=4sblEu4x5ug
Without 3D LUT support in three.js we cannot get the "cinematic look" without doing it the hard way using lights and materials, when really you can just achieve a gritty and emotional look via a 3D LUT on top of a normal-looking scene.
From the point of view of a graphic designer / artist, this sounds great! Three.js needs a more powerful color management workflow. I found it very confusing that gammaOutput = true would show the true colors of my textures (after correctly setting the gamma value to 2.2) while "native" colors remain in a linear color space and get incorrectly converted to my target gamma.
This line looks weird to me though:
> colorSpace ( Linear, sRGB, AdobeRGB, Gamma20, Gamma22 )
This is mixing a type of color space (Linear) with specific color spaces/profiles (sRGB, AdobeRGB) with specific gammas (Gamma20, Gamma22). Also, which linear color space would this conform to? Three.js's own, non-color corrected interpretation of linear, or a color corrected one aligned with the "ACEScg" profile (https://en.wikipedia.org/wiki/Academy_Color_Encoding_System)?
I would separate this into 3: the type of the color space (linear or gamma corrected), the target gamma (if a gamma corrected space is set), and the color space model (sRGB, AdobeRGB, Linear ACEScg, etc...). Then, if only a type is set, it'd be nice if three.js assumed good enough standards for the non-professional / beginner (if gamma corrected: 2.2 gamma, sRGB; if linear: Linear ACEScg). If no type is set, I'd default to gamma corrected: while linear is the accurate one, most beginners don't even know what a color space is, and professionals could simply set three.js to work in linear, globally.
A good color management workflow to follow would be that of Adobe: set a working space for your scene, and then offer the possibility to override it or convert to it in an asset-by-asset case.
Oh, and color mapping via 3D LUTs would be awesome as well 😍
@zomboh Can you mix different gamma with sRGB and AdobeRGB? My understanding is that you can not. Gamma2.2 is sort of a poor man's sRGB?
From the sRGB wikipedia article you can see that sRGB includes a gamma curve: https://en.wikipedia.org/wiki/SRGB
"sRGB uses the ITU-R BT.709 primaries, the same as in studio monitors and HDTV,[2] a transfer function (gamma curve) typical of CRTs, and a viewing environment designed to match typical home and office viewing conditions. This specification allowed sRGB to be directly displayed on typical CRT monitors of the time, which greatly aided its acceptance."
So I believe that sRGB should not be combined with another gamma. You will apply double gamma correction.
In the Adobe RGB wikipedia article: https://en.wikipedia.org/wiki/Adobe_RGB_color_space
"As with sRGB, the RGB component values in Adobe RGB (1998) are not proportional to the luminances. Rather, a gamma of 2.2 is assumed, without the linear segment near zero that is present in sRGB. The precise gamma value is 563/256, or 2.19921875. In coverage of the CIE 1931 color space the Adobe RGB (1998) color space covers 52.1%"
Thus I believe that AdobeRGB, sRGB and Gamma2 and Gamma22 are color spaces and they should be mutually exclusive.
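To make the "mutually exclusive" point concrete, here is a quick numeric comparison of the sRGB transfer curve (constants per the sRGB specification) against a plain 2.2 power curve:

```javascript
// sRGB decode: piecewise, per the sRGB specification (IEC 61966-2-1)
function srgbToLinear( c ) {
	return c <= 0.04045 ? c / 12.92 : Math.pow( ( c + 0.055 ) / 1.055, 2.4 );
}

// a plain 2.2 power curve, the "poor man's sRGB"
function gamma22ToLinear( c ) {
	return Math.pow( c, 2.2 );
}

// Mid-grey is close (≈ 0.2140 vs ≈ 0.2176), but near black the sRGB linear
// segment diverges sharply from the power curve, which is why stacking one
// correction on top of the other double-corrects.
console.log( srgbToLinear( 0.5 ).toFixed( 4 ), gamma22ToLinear( 0.5 ).toFixed( 4 ) );
console.log( srgbToLinear( 0.02 ), gamma22ToLinear( 0.02 ) );
```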
I can see us adding ACES. The ACES color standards are based on the HDTV/4K standard Rec. 2020, which itself defines a gamma correction function for its linear colors:
https://en.wikipedia.org/wiki/Rec._2020#Transfer_characteristics
I am still a bit confused as to whether Rec 2020 is natively linear as stated in the ACES article or whether it is gamma corrected natively. But either way, I believe that it is mutually exclusive with the other transforms. You get to pick one, not a mix and match approach.
@bhouston you're right! I stand corrected. I think I mixed up the gamma with the white point since you can target both to an arbitrary value in a calibration, but in the case of a color space there is a fixed gamma value, indeed. Thanks for the clarification!
I'm strongly considering a change to how A-Frame (http://aframe.io) handles color, based on the two "workflows" in Unity. Essentially:

- workflow=gamma: same as current three.js defaults, except that texture.encoding is ignored. Colors and textures are used as they are stored, even though shader calculations treat inputs as if they were linear space. Output is not gamma-corrected.
- workflow=linear: all color inputs are assumed to be sRGB, and converted to linear when passed to the GPU; textures are interpreted according to texture.encoding, and converted to linear in the shader. Output is gamma-corrected (unless you opt out for post-processing).

Mentioning it here because I think this syntax may be easier for users to understand than input/output gamma, encoding, and colorspace. In either workflow, input and output conversions are managed for you. Input material.color.setHex( 0x4285f4 ) will look the same as #4285f4 in CSS, assuming full lighting. That is not true of three.js when gammaOutput: true.
It could be worth considering a similar "workflow" option in three.js. Even if we can't simplify it down to a single setting like that, documenting the two workflows and the specific settings needed to achieve them would help developers.
@donmccurdy I am inclined to agree with you.
However, in your Unity link, I was surprised to read:
> These inputs are then passed to the Shader, with lighting calculations taking place in linear space as they normally do.
That sounds different than "treated as if they were in linear space"...
I never liked the fact that colors in three.js are assumed to be in linear space, as this conflicts with CSS, where they are in sRGB space.
> Even if we can't simplify it down to a single setting like that, documenting the two workflows and the specific settings needed to achieve them would help developers.
I agree - reading discussions here and on the forum, it seems like color spaces and texture encoding are a major point of confusion for users at the moment, beginners and experts alike.
Clearly documented workflows would be a good starting point for improvements to the API.
> These inputs are then passed to the Shader, with lighting calculations taking place in linear space as they normally do.
>
> That sounds different than "treated as if they were in linear space"...
I don't follow what you mean here, or for that matter what the Unity docs refer to with "as they normally do." The Unity quote is from the "linear" workflow, whereas your quote from my post is describing the "gamma" workflow. But I think the gist (in the preferred linear workflow) would be that when the user selects a color in the Unity Editor, gamma correction is removed when creating the GPU constants. In effect that shields the user from having to worry about all of this. 😅
I am not sure we should support this weird gamma mode. It is just simply wrong and I do not see a reason to support it except that Unity has also implemented a "wrong" mode.
The future is purely linear workflows. I am open to improving their usefulness and to removing surprises, but I think encouraging a clearly wrong mode is not the answer.
> I agree - reading discussions here and on the forum, it seems like color spaces and texture encoding are a major point of confusion for users at the moment, beginners and experts alike.
I think this can be solved with documentation and maybe a chart or something. We should not muddy the waters by having a mode that ignores texture.encoding; that is just wrong. I would only suggest something like that for backwards compatibility's sake.
> I never liked the fact that colors in three.js are assumed to be in linear space, as this conflicts with CSS, where they are in sRGB space.
We should maybe have a table that says the encoding of each parameter. I am okay with changing the color space of the color specification variables to what is most standard in other tools (Unity, UE4, Maya, 3DS Max, V-Ray, etc.)
I agree that the gamma mode has no benefit other than backwards compatibility, but unfortunately that's a major sticking point... If we were to introduce linear vs. gamma modes to three.js, probably we should keep gamma mode exactly the same as the current defaults, rather than ignoring texture.encoding as Unity does.
I'd be even happier with simply going entirely to a linear workflow — I agree linear is the preferred direction. If we do the following, existing scenes would have a (hopefully) smooth update:

1. default to remove gamma correction from THREE.Color at GPU upload
2. default to apply gammaOutput=true (or whatever we rename it to)
3. default to consider color maps sRGB

Unfortunately (3) is hard. Most users are not setting texture.encoding today, and most loaders do not set texture.encoding on their textures either, so everything uses the default THREE.LinearEncoding. We'd have to ignore that setting, or users would need to start setting encoding on all of their textures when they update. A possible workaround would be to introduce:

texture.encoding = THREE.DefaultEncoding

... which leaves the renderer free to infer an encoding from the workflow, format, and map slot. I hope that is a reasonable API. We'd probably want to do 1-3 all in a single release.
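A sketch of how a renderer might resolve such a default encoding. All names here are hypothetical (DefaultEncoding and this inference logic do not exist in three.js); this only illustrates the "infer from workflow, format, and map slot" idea.

```javascript
// Hypothetical resolution of a proposed THREE.DefaultEncoding from the
// workflow and the map slot. None of these names exist in three.js today.
const DefaultEncoding = 'DefaultEncoding';

function resolveEncoding( texture, slot, workflow ) {
	if ( texture.encoding !== DefaultEncoding ) return texture.encoding; // explicit setting wins
	if ( workflow === 'gamma' ) return 'LinearEncoding'; // legacy: use texels as stored
	// Linear workflow: LDR color maps are assumed sRGB; data maps stay linear.
	const colorSlots = [ 'map', 'emissiveMap', 'envMap' ];
	const isHDR = texture.type === 'FloatType' || texture.type === 'HalfFloatType';
	return ( colorSlots.includes( slot ) && ! isHDR ) ? 'sRGBEncoding' : 'LinearEncoding';
}

const tex = { encoding: DefaultEncoding, type: 'UnsignedByteType' };
console.log( resolveEncoding( tex, 'map', 'linear' ) );       // → sRGBEncoding
console.log( resolveEncoding( tex, 'normalMap', 'linear' ) ); // → LinearEncoding
```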
> (1) default to remove gamma correction from THREE.Color at GPU upload
Remove gamma or remove sRGB. Those are slightly different. I am not sure this needs to be done but it is a choice.
> (2) default to apply gammaOutput=true (or whatever we rename it to)
gammaOutput=true is there for backwards compatibility. We really should have renderer.outputColorSpace=sRGB instead.
> (3) default to consider color maps sRGB
This is acceptable in the case of LDR maps, whereas HDR maps should be assumed to be linear.
texture.colorSpace = sRGB, Linear, or Gamma22
texture.texelEncoding = Raw, RGBM16, RGBD, RGBM8, LogLuv
colorSpace and texelEncoding should be separated. They are currently conflated and this leads to confusion.
> Remove gamma or remove sRGB. Those are slightly different. I am not sure this needs to be done but it is a choice.
If it isn't done, (1) enabling the linear workflow above will put colors and textures out of sync in existing scenes, (2) matching colors in CSS and three.js will be harder, and (3) it seems inconsistent with color-managed tools like Blender and Unity Editor.
Agreed on the rest of your comments. If we split colorSpace/texelEncoding, something like texture.colorSpace = THREE.DefaultColorSpace (as default) would give us the flexibility to make good base assumptions about LDR vs HDR maps.
Call it AutoColorSpace rather than default as auto implies it switches smartly based on context. I am fine changing the color space for material parameters.
I thought to stress my preference for Filmic. This video is comprehensive info about having high dynamic range: https://youtu.be/m9AT7H4GGrA I realize real-time GPU performance (FPS) may not be optimal at high bit depth, but as GL moves forward with realism, particularly in Blender with Eevee, I felt those building three.js might appreciate a reminder of the potential and the rising bar: https://youtu.be/gy4E9nc5m0E
Anyway, I guess for clarity the point is you can't get photoreal (realtime) interactive rendering without high dynamic range. If content (HDR light maps, textures, models, etc.) is best coming from Blender, then using the third-party Filmic encoding is the right way in that camp.
Anyway, I'm hopeful that working with Filmic encoding in Blender with Eevee will translate well to three.js via the web, and just want people to be mindful of those ideals. I suspect it might apply only to exporting or something, but I suspect you understand that better than I think I do.
The way hdr is implemented in three.js means it costs almost nothing because it uses rgba8 for storage. I agree that filmic tone mapping on by default is best.
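For reference, here is a sketch of RGBM-style packing, which illustrates how HDR values can survive an RGBA8 texture: the alpha channel stores a shared multiplier. The quantization detail below is simplified, and the function names are invented for the example.

```javascript
// Sketch of RGBM encoding: pack an HDR color into [0,1] RGBA by storing a
// shared multiplier in alpha. maxRange = 16 corresponds to "RGBM16".
// (Assumes a nonzero color; a real implementation would handle black.)
function encodeRGBM( r, g, b, maxRange ) {
	const maxRGB = Math.max( r, g, b );
	let m = Math.min( maxRGB / maxRange, 1.0 );
	m = Math.ceil( m * 255 ) / 255; // quantize the multiplier as it would be stored in 8 bits
	const scale = m * maxRange;
	return [ r / scale, g / scale, b / scale, m ];
}

function decodeRGBM( [ r, g, b, m ], maxRange ) {
	return [ r * m * maxRange, g * m * maxRange, b * m * maxRange ];
}

const hdr = [ 4.0, 1.0, 0.25 ];          // radiance above the LDR range
const packed = encodeRGBM( ...hdr, 16 ); // every component now fits in [0,1]
console.log( decodeRGBM( packed, 16 ) ); // ≈ [ 4, 1, 0.25 ]
```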
> I agree that the gamma mode has no benefit other than backwards compatibility
https://github.com/mrdoob/three.js/issues/15100#issuecomment-431818277 may be a counter-argument...
Here are early results from A-Frame scenes, switching to a linear workflow. Changes:
1. color inputs are assumed sRGB
2. color textures are assumed sRGB
3. gammaOutput on by default
The overall trend, at least for these examples, appears to be that shading is slightly less pronounced. The differences are definitely more acceptable for us than changing default values for any of those settings alone would have been.
(Comparison screenshots for six example scenes, gamma workflow vs. linear workflow; images not included here.)
Just remember one of the reasons for the slight difference is that sRGB to linear and then linear to gamma 2.0 won't restore the inputs. The last encode should be to sRGB instead.
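A quick numeric check of that point (constants per the sRGB specification):

```javascript
// Decoding sRGB to linear and re-encoding with a plain gamma 2.0 curve does
// not restore the original value; re-encoding with sRGB does.
function srgbToLinear( c ) {
	return c <= 0.04045 ? c / 12.92 : Math.pow( ( c + 0.055 ) / 1.055, 2.4 );
}
function linearToGamma20( c ) {
	return Math.pow( c, 1.0 / 2.0 );
}
function linearToSrgb( c ) {
	return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow( c, 1.0 / 2.4 ) - 0.055;
}

const linear = srgbToLinear( 0.5 );
console.log( linearToGamma20( linear ).toFixed( 3 ) ); // ≈ 0.463, not the 0.5 we started with
console.log( linearToSrgb( linear ).toFixed( 3 ) );    // 0.500: round-trips
```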
Here's another comparison displaying possible corrections to three.js color management. In the links below (and this branch) I've applied some significant changes:

- remove gammaInput, defaulting texture.encoding to null
- add Color.colorManagement, default=true, which causes user hexadecimal and HSL color inputs to be treated as sRGB. The getter/setter methods do this conversion, so color.toArray() and color.r/g/b are linear.
- add renderer.colorManagement, default=true, which causes color maps to be treated as sRGB, unless otherwise specified.
- add renderer.linearOutput, default=false, which disables linear->sRGB encoding of output (e.g. for use with postprocessing).

I'm not confident that's the right user API, but the end result should be correct. The effect is noticeable – especially so on examples that weren't configured quite correctly before, such as enabling gammaOutput=true but not marking color textures as sRGB. I haven't attempted to fix any of the examples in the comparisons below.
Demo (synced to r101)
(Before/after comparison screenshots, gamma vs. linear; images not included here.)
@donmccurdy This looks great, although I agree we should proceed with caution on this. It will make things much simpler if we can get this API perfect on the first go and not have to change things later.
From my perspective, here's what I'd like to see from three.js color management:
1. Be completely invisible to users who don't know anything about color spaces and texture encoding, and create correct output for normal workflows. That is, for the 99% of users currently using sRGB colors (such as CSS hex values) as Color input, and using PNGs or JPGs for textures, things should just work.
2. Be flexible enough for people who do understand this to be able to change things to their liking. Perhaps we should spend some time identifying what people in this group actually need, to see whether your proposed API is sufficient.
> add Color.colorManagement, default=true, which causes user hexadecimal and HSL color inputs to be treated as sRGB. The getter/setter methods do this conversion, so color.toArray() and color.r/g/b are linear.
This is especially useful. At the moment, it's necessary to do this on every material created if you want correct colors:

```js
const material = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
material.color.convertSRGBToLinear();
```
> add renderer.colorManagement, default=true, which causes color maps to be treated as sRGB, unless otherwise specified.
This is also great. It's currently necessary to either set renderer.gammaInput = true, or do this for every .map, .emissiveMap and .envMap (at least the LDR ones):

```js
const texture = textureLoader.load( url );
texture.encoding = THREE.sRGBEncoding;
```
The original tasks defined in https://github.com/mrdoob/three.js/issues/11337#issue-228778022 are implemented via #18108 and #18127.
The issue also discussed the idea of decoupling encoding into color space and encoding format. If nobody disagrees, I suggest moving this topic to a new issue (if it's still relevant).
It's still relevant, but not essential yet. I'll take care of it.
It is OK to close this.
As discussed in #11332:

1. renderer.gammaInput needs to be deprecated. We now support texture.encoding instead.
2. renderer.gammaOutput needs to be deprecated and replaced with renderer.outputEncoding when rendering to the screen. We already support renderTarget.texture.encoding when rendering to a render target.
3. WebGLPrograms.getProgramCode() needs to be updated to respond to changes in renderer.outputEncoding. Setting material.needsUpdate = true will then force the change.
4. Legacy warnings need to be added, and the examples need to be updated to accommodate this change.