Hi @Wagyx,
Sorry for the late answer to a great question :) So basically the colour.sd_to_XYZ definition and the various methods it supports are intrinsically dimensionless/unitless. dw/∆λ is thus likewise dimensionless; its purpose is only to account for the step/bin size of the discretized data. For example, assuming you had a bin size of 1nm, its value would be 1; likewise, for a 5nm bin size, it would be 5, and you would certainly not want another value for it here. The assumption is that the spectral distribution, the CMFS (and the illuminant) have compatible units. With that in mind, you should do the scaling on your end and ensure that the values are compatible.
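For example, a minimal sketch of doing that scaling on your end could look as follows; the colour.CMFS accessor name is as per colour 0.3.x, and k=683 lm/W is the normalisation constant for absolute photometric values:
import colour
# Sketch: scale the blackbody values from W/m2/sr/m to W/m2/sr/nm so that
# they are compatible with CMFS tabulated at nanometer intervals.
sd = colour.sd_blackbody(5800) * 1e-9
cmfs = colour.CMFS['CIE 1931 2 Degree Standard Observer']
# With k=683 lm/W, Y is the absolute luminance in cd/m2.
XYZ = colour.sd_to_XYZ(sd, cmfs, k=683)
print(XYZ[1])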
Now, that being said, I reckon that we should probably do something at the colour.sd_blackbody definition level. As you noted correctly, the values stored are in W/m2/sr/m but for reference wavelengths in nanometers. While this is not problematic when working with relative values (which is almost exclusively everything we do with that function), it will bite you if you are not aware of it when working with absolute units.
Here is some code to illustrate that:
import numpy as np
import colour
nm_to_m = lambda x: x * 1e-9  # nanometers -> meters; also turns per-meter values into per-nanometer
m_to_nm = lambda x: x * 1e9   # meters -> nanometers
sd = colour.sd_blackbody(5800)
print(sd.to_series().describe())
# count 4.210000e+02
# mean 2.366771e+13
# std 2.694523e+12
# min 1.789272e+13
# 25% 2.149788e+13
# 50% 2.422853e+13
# 75% 2.617677e+13
# max 2.688004e+13
# Name: 5800K Blackbody, dtype: float64
# https://www.opticsthewebsite.com/OpticsCalculators
# Total output over waveband: 9.93929e+2 Watts/cm^2-sr
# Total output over all wavelengths: 2.04242e+3 Watts/cm^2-sr
# https://www.spectralcalc.com/blackbody_calculator/blackbody.php
# Spectral Radiance: 2.68831e+07 W/m2/sr/µm
# Band Radiance: 9.9462e+06 W/m2/sr
# https://www.wolframalpha.com/input/?i=5800+degrees+kelvin+blackbody+radiance&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda2%22%7D+-%3E%220.78%22&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda1%22%7D+-%3E%220.36+microns%22&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda%22%7D+-%3E%220.5+micron%22
# spectral radiance as function of wavelength | 2.6882 W/(sr cm^2)/nm (watts per steradian square centimeter per nanometer)
# = 2.6882×10^13 W/(sr m^2)/m (watts per steradian square meter per meter)
# = 2688.2 flicks
# Spectral radiance at 500nm is as expected
print('Spectral Radiance: {0:.4e}W/m2/sr/m'.format(sd[500]))
# Spectral Radiance: 2.6880e+13W/m2/sr/m
print('Spectral Radiance: {0:.4e}W/m2/sr/nm'.format(nm_to_m(sd[500])))
# Spectral Radiance: 2.6880e+04W/m2/sr/nm
# Integrated radiance will however require scaling of either values or wavelengths:
radiance = np.trapz(nm_to_m(sd.values), sd.wavelengths)
print('Integrated Radiance: {0:.4e}W/m2/sr'.format(radiance))
# Integrated Radiance: 9.9451e+06W/m2/sr
radiance = np.trapz(sd.values, nm_to_m(sd.wavelengths))
print('Integrated Radiance: {0:.4e}W/m2/sr'.format(radiance))
# Integrated Radiance: 9.9451e+06W/m2/sr
I'm tempted to scale the values directly once and for all. Can you try doing something like that in your code and let me know if it gets you where you want:
sd = colour.sd_blackbody(5800) * 1e-9
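For reference, the scaled distribution should then directly reproduce the per-nanometer and integrated figures from the snippet above without any further conversion:
import numpy as np
import colour
sd = colour.sd_blackbody(5800) * 1e-9  # values now in W/m2/sr/nm
print('{0:.4e}'.format(sd[500]))
# 2.6880e+04, i.e. W/m2/sr/nm, matching the references above
print('{0:.4e}'.format(np.trapz(sd.values, sd.wavelengths)))
# 9.9451e+06, i.e. W/m2/sr, integrating directly over nanometers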
Thank you for clarifying the dimensionless aspect of the sd_to_XYZ function. Indeed, I have already tried scaling the values outside the function and it worked, obviously. I deal with absolute values, so I guess the responsibility for having the correct units lies in my hands.
We should probably 1) document that properly and 2) maybe, as I was suggesting above, scale the values. I will give it a stab for testing and see what happens.
@MichaelMauderer: What do you think about that? The question being: should we transform the values so that absolute computations do not require any scaling?
As predicted, it does not change anything on our end; the only failing test besides those of colour.sd_blackbody is a doctest for colour.temperature.uv_to_CCT_Ohno2013:
Failed example:
uv_to_CCT_Ohno2013(uv, cmfs) # doctest: +ELLIPSIS
Expected:
array([ 6.5074738...e+03, 3.2233461...e-03])
Got:
array([ 6.50747380e+03, 3.22334609e-03])
Related to precision, so no problem at all.
I think that, rather than changing the values and subtly breaking code that uses them, it might be better to document the current behaviour and create a new function with the added scaling. Maybe also deprecate the current function and add a new function with nm in the name as an explicit replacement.
Alternatively, what about having an argument that enables the scaling (but is off by default), while issuing a warning each time the definition is used, saying that the values will be scaled in a future release? Once this is done, we could warn again for a few releases, saying that the values have been scaled. We are still in alpha after all ;)
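To make that concrete, here is a hypothetical sketch of such a transition path as a wrapper; the scale argument name and the wrapper itself are illustrative only, not existing API:
import warnings

import colour

def sd_blackbody_scaled(temperature, scale=False):
    # Hypothetical wrapper illustrating the proposal above.
    sd = colour.sd_blackbody(temperature)  # values in W/m2/sr/m
    if not scale:
        warnings.warn(
            'The values will be scaled from W/m2/sr/m to W/m2/sr/nm in a '
            'future release, pass "scale=True" to opt in early.',
            FutureWarning)
        return sd
    return sd * 1e-9  # values in W/m2/sr/nm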
As I was going through Mitsuba 2.0, I came across this: https://github.com/mitsuba-renderer/mitsuba2/blob/master/src/spectra/blackbody.cpp#L80
They effectively scale the spectrum, so maybe we could say that this is the common practice, document the expected units properly and change it.
Hello,
I am generating black body spectral radiance using
sd = colour.sd_blackbody(5800)
which gets values between 1.78e13 and 2.68e13 in W/sr/m²/m, as expected. Then, I convert the spectrum to XYZ in order to get the visual luminance (in cd/m²) from the Y. I get a value of 1.86922064558e18 for the Y when the true order of magnitude should be 9. I know I could correct for that using the k factor, as it already converts from watt to lumen (that's the 683) and changes the range from 0-100 to 0-1.
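Here is a minimal reproduction of what I am doing (assuming the colour 0.3.14 names):
import colour
sd = colour.sd_blackbody(5800)  # values in W/m2/sr/m
cmfs = colour.CMFS['CIE 1931 2 Degree Standard Observer']
XYZ = colour.sd_to_XYZ(sd, cmfs, k=683)
print(XYZ[1])
# 1.86922064558e18, i.e. 1e9 times too large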
I have looked into the code and found the mistake: you are using the spectral interval dw in nanometers, but the spectrum is in meters (for the wavelength dimension). It is easy to fix for the "Integration" method by simply multiplying dw by 1e-9, but I don't know about the other methods. The line I am talking about is in tristimulus.py:
dw = cmfs.shape.interval  # * 1e-9
Or perhaps I am getting things wrong? I am using version 0.3.14, by the way.