Hi Ryan,
This is a good summary created by ChatGPT:
The Kubelka-Munk theory is a mathematical model, published in 1931, that describes the interaction of light with a homogeneous, particulate material. This theory is often used in color science, especially in the field of color measurement and color reproduction.
The theory is based on the assumption that the material under consideration consists of a large number of small, identical particles. These particles can either absorb or scatter light, and their behavior is characterized by two coefficients: the absorption coefficient (K) and the scattering coefficient (S). These coefficients are fundamental to the Kubelka-Munk model.
Translating sRGB (standard Red, Green, Blue) color data to spectral data for Kubelka-Munk analysis involves converting the RGB values to spectral reflectance values. The spectral reflectance is a measure of how much light of each wavelength is reflected by the material.
Here's a simplified explanation of the process:
1. Conversion to Spectral Data: Each RGB triplet represents a specific color in the sRGB color space. To convert this to spectral data, one can use a conversion algorithm or a color matching function. These functions provide a mapping between the sRGB values and the corresponding spectral reflectance values.
2. Kubelka-Munk Model: Once you have the spectral reflectance data, you can apply the Kubelka-Munk model. The model uses the absorption and scattering coefficients to calculate the reflectance and transmittance of the material as a function of wavelength. In essence, the Kubelka-Munk equations express the reflectance and transmittance of a material in terms of its absorption and scattering properties (see the sketch after this list).
3. Color Rendering: The resulting spectral data can be used to understand how the material will appear under different lighting conditions. By simulating the interaction of light with the material at various wavelengths, you can predict how it will be perceived in terms of color.
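For reference, the single-constant Kubelka-Munk relations behind step 2 are compact. Here is a minimal JavaScript sketch (the function names are placeholders; the same formulas appear in the shader snippets later in this thread):

// Convert a reflectance value in (0, 1] to the Kubelka-Munk K/S ratio.
function reflectanceToKS(r) {
    return Math.pow(1 - r, 2) / (2 * r);
}

// Convert a K/S ratio back to a reflectance value.
function ksToReflectance(ks) {
    return 1 + ks - Math.sqrt(ks * ks + 2 * ks);
}

// Mix two reflectance values at one wavelength with mixing factor t in [0, 1].
function mixReflectance(r1, r2, t) {
    const ks = (1 - t) * reflectanceToKS(r1) + t * reflectanceToKS(r2);
    return ksToReflectance(ks);
}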
Mixbox also uses Kubelka-Munk. The main difference between spectral.js and Mixbox is that Mixbox uses spectral data from real pigments, while spectral.js uses generated spectral data for red, green and blue (in the upcoming version; the current version uses white, cyan, magenta, yellow, red, green and blue).
Thanks for the fast response! I'd love to learn a little more about how you do the first bullet point. I'm familiar at a high level with the mixbox paper -- is your approach similar?
Another thing I was curious about: in spectral_mix in spectral.wgsl it looks like you convert from sRGB -> linear RGB -> reflectance spectrum -> XYZ, and then use the XYZ data to figure out how to interpolate between the spectrums. Why is that necessary rather than using the interpolation value t directly? Also... is XYZ the CIE 1931 XYZ color space?
I appreciate anything you're willing to share! This project is really cool :-)
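For reference, the sRGB -> linear RGB step in that chain is the standard sRGB decoding; a minimal JavaScript sketch (not taken from spectral.wgsl):

// Standard sRGB decoding: map a gamma-encoded sRGB channel in [0, 1] to linear light.
function srgbToLinear(c) {
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}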
My method is vastly different from Mixbox's approach: they use a Newton solver to find the right pigment concentrations for reconstructing the SPD of a color. Since they have a fixed set of available pigments, they packed this into a look-up texture (LUT), eliminating the need for the costly solver at runtime.
Spectral.js is based on the work done by Scott Allen Burns: Fast RGB to Spectrum Conversion for Reflectances
It uses 3 base SPDs for red, green and blue and combines them to create an SPD for the given input color; this happens in the linear-to-reflectance function. (Note: the upcoming version uses 3 base SPDs; the current version uses more.)
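A minimal sketch of that combination step, assuming the three base curves are available as sampled arrays (the parameter names below are placeholders, not the library's API):

// Combine three base SPDs (arrays of samples at the same wavelengths), weighted
// by the linear RGB components, into a reflectance spectrum for the input color.
function linearToReflectance(lrgb, redSPD, greenSPD, blueSPD) {
    return redSPD.map((r, i) => lrgb[0] * r + lrgb[1] * greenSPD[i] + lrgb[2] * blueSPD[i]);
}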
All mixing, using Kubelka-Munk theory, is done in spectral space; after that the result is converted back to XYZ using the CIE 1964 (10 degree) color matching functions weighted by the CIE D65 standard illuminant.
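That reflectance-to-XYZ conversion is the standard weighted summation over wavelengths. A minimal sketch, assuming the CMF and illuminant tables are available as arrays (parameter names are placeholders):

// Integrate a reflectance spectrum against color matching functions weighted by an
// illuminant to get XYZ. `cmf` is an array of [xbar, ybar, zbar] samples and
// `illuminant` an array of power samples at the same wavelengths.
function reflectanceToXYZ(reflectance, cmf, illuminant) {
    let x = 0, y = 0, z = 0, norm = 0;
    for (let i = 0; i < reflectance.length; i++) {
        const w = illuminant[i];
        x += reflectance[i] * w * cmf[i][0];
        y += reflectance[i] * w * cmf[i][1];
        z += reflectance[i] * w * cmf[i][2];
        norm += w * cmf[i][1];
    }
    return [x / norm, y / norm, z / norm];
}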
Concerning the concentration function (you are referring to this function, right?): the reason it's not linearly interpolating is that the luminance is needed to determine the correct concentrations; this ensures the color is perceptually evenly distributed between 0 and 1, where 0 is the first color and 1 is the second color. This concentration function was the hardest part to figure out and took me a lot of time (although it's probably not unique, I haven't seen it used anywhere else).
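One plausible form of such a luminance-weighted concentration function, shown only to illustrate the general idea (this is an assumption, not necessarily the formula spectral.js uses):

// A guess at a luminance-weighted concentration function (not necessarily the
// library's exact formula): weight each color by its luminance and the square of
// its mixing factor, then normalize so the result stays in [0, 1].
function concentration(l1, l2, t) {
    const w1 = l1 * Math.pow(1 - t, 2);
    const w2 = l2 * Math.pow(t, 2);
    return w2 / (w1 + w2);
}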
You can ask me anything; I love sharing the knowledge I gained over the last year of researching this. And thank you for your kind words!
Oh wow, thanks! That's a lot for me to dig into.
the reason it's not linearly interpolating is that the luminance is needed to determine the correct concentrations; this ensures the color is perceptually evenly distributed between 0 and 1, where 0 is the first color and 1 is the second color.
Got it. I don't understand how you derived the concentration function, but this is enough for me to go off of for now. I'll let you know what other questions I have as I work through this!
Out of curiosity, what's the benefit of upgrading the library to use 3 SPDs? Is it more efficient? Naively I would guess that spectral_linear_to_reflectance will need to do only half the calculations because it's summing 3 SPDs instead of 6, but I don't know if that's true or if it's why you're making the change. Do you expect different output than before?
You're not being naive; that's exactly why I want to reduce the number of base SPDs.
Output will be slightly different but not very noticeable.
Ah, nice! I have a couple of other performance questions, if that's ok!
In spectral_mix you compute the luminance of the spectrums like so...
float l1 = spectral_reflectance_to_xyz(R1)[1];
float l2 = spectral_reflectance_to_xyz(R2)[1];
Is there a reason that you compute XYZ from the spectrum, and not from the lrgb colors? I expect lrgb -> XYZ to be faster than spectrum -> XYZ.
Another thing I'm curious about: the RGB component functions look a lot like logistic functions! Do you know if anyone has tried approximating the RGB component SPDs with logistic functions?
If you did this, and you computed XYZ from lrgb above, then I think you could do away with the R1 and R2 vectors. You could compute the spectral components on the fly in this loop...
float R[SPECTRAL_SIZE];
for (int i = 0; i < SPECTRAL_SIZE; i++) {
    float r1 = compute_reflectance_using_logistic_approximations(lrgb1, i);
    float r2 = compute_reflectance_using_logistic_approximations(lrgb2, i);
    float KS = (1.0 - t) * (pow(1.0 - r1, 2.0) / (2.0 * r1)) + t * (pow(1.0 - r2, 2.0) / (2.0 * r2));
    float KM = 1.0 + KS - sqrt(pow(KS, 2.0) + 2.0 * KS);
    R[i] = KM;
}
I'm not sure what kind of performance impact this would have, but it would use 38 * 4 * 2 = 304 fewer bytes per fragment (38 spectral samples, 4 bytes per float, 2 arrays). I'm guessing it'd be at least a little faster since clearing/moving that memory around takes time.
EDIT: Aha, I see that Scott Burns already had this idea, kind of. Hyperbolic functions look like logistic functions http://scottburns.us/reflectance-curves-from-srgb-10/
Is there a reason that you compute XYZ from the spectrum, and not from the lrgb colors? I expect lrgb -> XYZ to be faster than spectrum -> XYZ.
No particular reason for this; it was just a design choice at the time, and I haven't fully optimized for speed. But you make a good point: this shaves off some multiplications. I have changed this in my dev version.
// Linear sRGB -> XYZ matrix (D65); the middle row gives luminance Y.
const RGB_XYZ = [
    [0.4123908, 0.35758434, 0.18048079],
    [0.21263901, 0.71516868, 0.07219232],
    [0.01933082, 0.11919478, 0.95053215],
];

let luminance = dotproduct(RGB_XYZ[1], linear);
I don't fully understand the part about the logistic functions. Wouldn't it be more computationally costly if it needs a lot of iterations to calculate the best value for that wavelength? But if you have a good idea you can implement it; I'm curious how it works.
Nice!!
I think my previous suggestion about logistic functions was more complicated than necessary. A simpler version is something like this...
Now that you compute XYZ from linear RGB, you access R1 and R2 only in the loop. Since they're not needed anywhere else, you don't need to pre-allocate them and store them in arrays. You may as well generate their values as you need them. So you could write a function that looks like this...
// Returns the reflectance value for a specific frequency; it's a giant
// switch statement that maps frequency -> reflectance value for `lrgb`.
float spectral_linear_to_reflectance(vec3 lrgb, int frequency) {
    float r = lrgb.r;
    float g = lrgb.g;
    float b = lrgb.b;
    switch (frequency) {
        case 380: return r * 0.03147571 + g * 0.49108579 + b * 0.97901834;
        // ... all the others ...
    }
}
... and call it from within the loop for each frequency when you need it.
Admittedly, I'm not sure how much faster this would make things. My intuition is that eliminating the two arrays would be meaningful since they're big, and they're allocated / cleared per fragment. But it's always worth measuring this kind of stuff.
When I find the time I will look into this.
I have to make a correction: I use the CIE 1931 2-degree CMFs and not the 1964 10-degree ones. I found that the former gives more pleasing output.
Sure! No rush. I found another simpler way to think about this. You can declare your red, green and blue basis curves as a big constant array...
const vec3 rgbCurves[SPECTRAL_SIZE] = vec3[SPECTRAL_SIZE](
    vec3(...),
    vec3(...),
    vec3(...),
    ...
);
And then your loop becomes...
float R[SPECTRAL_SIZE];
for (int i = 0; i < SPECTRAL_SIZE; i++) {
    float r1 = dot(lrgb1, rgbCurves[i]);
    float r2 = dot(lrgb2, rgbCurves[i]);
    float KS = (1.0 - t) * (pow(1.0 - r1, 2.0) / (2.0 * r1)) + t * (pow(1.0 - r2, 2.0) / (2.0 * r2));
    float KM = 1.0 + KS - sqrt(pow(KS, 2.0) + 2.0 * KS);
    R[i] = KM;
}
Again... no pressure! But this feels even simpler so I felt like I should mention it.
Hi @rvanwijnen,
The library is amazing. Thank you very much for open sourcing it.
I have a question about the Saunderson correction. In the source code the corresponding line is commented out:
Does it provide unsatisfying result? For example, did the result look too dark?
@eugene-khyst I think you meant to tag @rvanwijnen :)
Hi,
In my experience the Saunderson correction works as intended, but I decided to disable it for the release to keep things simple. You can uncomment that line and use it if you like.
You can change the correction on this line: https://github.com/rvanwijnen/spectral.js/blob/9e5f7b921473730df8f67c4a583e21e3183f5159/spectral.js#L242
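For context, the Saunderson correction adjusts the ideal Kubelka-Munk reflectance for light reflected at the surface of the material. A minimal sketch of the textbook formula (the coefficient values shown are typical defaults, not necessarily what spectral.js uses):

// Textbook Saunderson correction: adjust the ideal Kubelka-Munk reflectance rKM
// for light reflected at the material's surface. k1 (external) and k2 (internal)
// reflection coefficients below are typical defaults, not the library's values.
function saunderson(rKM, k1 = 0.04, k2 = 0.6) {
    return k1 + ((1 - k1) * (1 - k2) * rKM) / (1 - k2 * rKM);
}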
Hi! Do you have a write-up of how this library works? I'd love to learn more about the details and how it differs from the Mixbox approach.