Closed brandondube closed 6 years ago
Hi @brandondube,
Thanks for looking into that, this is great! There is actually #240 about this very specific topic.
I learned that this requires re-writing most of the color space transforms into a numpy ufunc-like format.
This part scares me a little; would you mind detailing, please?
A more efficient scheme is to make a meshgrid
I had started to implement faster diagrams on a related project using Vispy: https://github.com/colour-science/colour-analysis
What I don't know, is why I get imaginary colors. Any help in that regard would be appreciated.
sRGB cannot represent values outside its gamut, which is over half of the whole visible spectrum :) This issue is of interest in that regard: https://github.com/colour-science/colour/issues/191
This part scares me a little, would you mind detailing please?
Sure, poking around quickly, it seems the errors I previously got when using colour's transform functions no longer occur, so it's possible there was an error in my code. I'll table that for now and look at it again later.
Regarding imaginary colors, the issue 'happens' upstream of sRGB conversion, I think?
I get
C:\Users\brand\Anaconda3\lib\site-packages\colour\models\rgb\transfer_functions\srgb.py:72: RuntimeWarning: invalid value encountered in less_equal
np.where(L <= 0.0031308, L * 12.92, 1.055 * (L ** (1 / 2.4)) - 0.055))
C:\Users\brand\Anaconda3\lib\site-packages\colour\models\rgb\transfer_functions\srgb.py:72: RuntimeWarning: invalid value encountered in power
np.where(L <= 0.0031308, L * 12.92, 1.055 * (L ** (1 / 2.4)) - 0.055))
The boundaries of the horseshoe map as follows with location (u',v') -> (X,Y,Z) format:
The xy points are correct on the 1931 diagram, but the XYZ values are huge, which I believe causes the errors in the -> sRGB conversion.
If I take the output of the XYZ->sRGB conversion, which is an ndarray of shape (samples,samples,3) and clip it to the range [0,1], it looks like this:
...Which is still a bit different from the correct colors.
I won't have time to look at that carefully before tonight but here is the code used with Vispy and np.meshgrid
for reference: https://github.com/colour-science/colour-analysis/blob/master/colour_analysis/visuals/diagrams.py#L44
I don't think there's a real need for vispy/gpu plotting -- removing the Delaunay and scatter code from the plots is all that is needed for high performance. Delaunay is expensive to compute, and scatter with millions of points is expensive to display.
The timeits I posted above don't consider the view time -- it takes my laptop multiple seconds to draw the diagrams because there are so many points, while viewing an image, even with glyphs on top (Planckian locus, spectral locus, etc.), takes mere milliseconds.
There is also a consequence for saving figures -- a 16M-element scatter plot saved to a vector format (svg, eps, pdf) will have 16 million objects in it and bring any renderer to its knees, while one with an image will contain an embedded png or other raster graphic for that portion, which draws quickly.
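To illustrate the difference, here is a minimal sketch (with random stand-in colours, not actual diagram data) contrasting a per-point scatter against a single imshow raster:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

samples = 64
rgb = np.random.rand(samples, samples, 3)  # stand-in for diagram colours

# scatter: one vector object per sample, so a saved svg/eps/pdf grows with N
fig, ax = plt.subplots()
yy, xx = np.mgrid[0:samples, 0:samples]
ax.scatter(xx.ravel(), yy.ravel(), c=rgb.reshape(-1, 3), s=1)
plt.close(fig)

# imshow: a single raster artist regardless of sample count
fig, ax = plt.subplots()
ax.imshow(rgb, origin='lower')
plt.close(fig)
```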
Anyway - the method I wrote code for works as follows:
- `[..., 0]` is R
- `[..., 1]` is G
- `[..., 2]` is B

This requires that the R, G, B "sub-arrays" fall inside the range [0,1], i.e. are valid (s)RGB coordinates. I know that some of each "end" of the horseshoe will have to clip since it isn't contained in sRGB, but I would still expect the interior to result in the correct colors.
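The channel convention and clipping described above can be sketched with a tiny hypothetical array:

```python
import numpy as np

# a tiny (2, 2, 3) "image"; the last axis holds (R, G, B)
srgb = np.array([[[1.2, 0.5, -0.1], [0.2, 0.3, 0.4]],
                 [[0.0, 1.0, 0.5], [0.9, 0.8, 0.7]]])

R = srgb[..., 0]  # red sub-array
G = srgb[..., 1]  # green sub-array
B = srgb[..., 2]  # blue sub-array

# out-of-gamut samples fall outside [0, 1]; clip them for display
clipped = np.clip(srgb, 0, 1)
```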
I don't think there's a real need for vispy/gpu plotting
Absolutely, I was just giving that as a reference, I would not want Colour to depend on Vispy :)
Your process seems to be fine by the look of it.
Out of curiosity, did you try using the colour.normalise_maximum definition on your sRGB values before plotting? It should perform both normalisation and clipping.
I tried swapping in colour.normalise_maximum -- if I use NaN outside the horseshoe, it will fail since max returns NaN. However, the choice of the dead-space value is arbitrary, and if I use u,v = 0.3, 0.3 there (a valid point in the horseshoe) and later cut those pixels out, I get the same plot as doing the norm and clip separately.
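For reference, a NaN-aware variant of maximum normalisation (a sketch, not colour's implementation) sidesteps the failure described above by using np.nanmax:

```python
import numpy as np

def normalise_maximum_nan_safe(rgb, clip=True):
    """Normalise by the largest finite component, ignoring NaN dead space."""
    factor = np.nanmax(rgb)  # np.max would return NaN here
    out = rgb / factor
    return np.clip(out, 0, 1) if clip else out

# second pixel is NaN "dead space" outside the horseshoe
rgb = np.array([[0.5, 1.0, 2.0],
                [np.nan, np.nan, np.nan]])
out = normalise_maximum_nan_safe(rgb)
# finite values are scaled by 1/2; the NaN pixel stays NaN for later masking
```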
I just realized that because ndarrays index as (row, column), my u' and v' were swapped; now the code below runs in <50ms and gets almost (?) the right plot
%%timeit
#from prysm.geometry import generate_mask
samples = 258
xlim = (0,1)
ylim = (0,1)
wvl = np.arange(420, 700, 10)
wvl_XYZ = colour.wavelength_to_XYZ(wvl)
wvl_uv = XYZ_to_uv(wvl_XYZ)
wvl_pts = wvl_uv*samples
wvl_mask = generate_mask(wvl_pts, samples)
mask_idxs = np.where(wvl_mask == 0)
u = np.linspace(xlim[0], xlim[1], samples)
v = np.linspace(ylim[0], ylim[1], samples)
uu, vv = np.meshgrid(u, v)
uu[mask_idxs] = 0.3
vv[mask_idxs] = 0.3
# stack u and v for vectorized computations
uuvv = np.stack((vv,uu), axis=2)
# map -> xy -> XYZ -> sRGB
xy = colour.Luv_uv_to_xy(uuvv)
xyz = colour.xy_to_XYZ(xy)
dat = colour.XYZ_to_sRGB(xyz)
# now make an alpha/transparency mask to hide the background
# and flip u,v axes because of (row, column) index semantics
alpha = np.ones((samples,samples)) * wvl_mask
dat = np.swapaxes(np.dstack((dat, alpha)), 0, 1)
#dat /= dat[:, :, 0:2].max()
dat = np.clip(dat,0,1)
# lastly, duplicate the lowest wavelength so that the boundary line is closed
wvl_uv = np.vstack((wvl_uv, wvl_uv[0,:]))
fig, ax = plt.subplots(figsize=(8,8))
ax.imshow(dat,
extent=[*xlim, *ylim],
interpolation='None',
origin='lower')
ax.set(xlim=(0,0.65), xlabel='CIE u\'',
ylim=(0,0.625), ylabel='CIE v\'')
ax.plot(wvl_uv[:,0], wvl_uv[:,1], c='0', lw=3)
plt.grid(False)
plt.close(fig)
#plt.savefig('horseshoe_02.png', dpi=300, bbox_inches='tight')
output (note, I clip without prior normalization):
prysm.geometry.generate_mask
is relatively expensive and has O(n) time complexity, wrapping the mask generation in an lru_cache would reduce the time to make the figure to <10ms. Alternatively, the wavelengths could be hand selected to be few where the plot is "square," and dense in the rounded edge.
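A sketch of the lru_cache idea, using a hashable key and a stand-in fill routine in place of prysm.geometry.generate_mask (the bounding-box fill is purely illustrative):

```python
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=8)
def cached_mask(vertices_key, samples):
    """Cache masks per (vertices, samples); ndarrays are unhashable, so the
    vertices are passed as a tuple of (column, row) pairs."""
    vertices = np.asarray(vertices_key)
    # stand-in fill: paint the polygon's bounding box so the sketch is
    # self-contained (the real generate_mask fills the polygon itself)
    mask = np.zeros((samples, samples))
    (c0, r0), (c1, r1) = vertices.min(axis=0), vertices.max(axis=0)
    mask[int(r0):int(r1) + 1, int(c0):int(c1) + 1] = 1
    return mask

key = tuple(map(tuple, [(10, 10), (50, 10), (50, 50), (10, 50)]))
m1 = cached_mask(key, 64)
m2 = cached_mask(key, 64)  # cache hit: no recomputation
```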
512x512 is enough for there to be no visible ragged edges, but it is just about perfect with 256x256 and the proper xlim/ylim (0-0.65 instead of 0-1).
If I normalize first by uncommenting dat /= dat[:, :, 0:2].max(), the image is extremely gray, and the magenta part should still be more purple. What am I missing?
I got that with small tweaking:
# %%
import matplotlib.pyplot as plt
import numpy as np
import colour
from colour.plotting import *
#from prysm.geometry import generate_mask
samples = 258
xlim = (0, 1)
ylim = (0, 1)
illuminant = DEFAULT_PLOTTING_ILLUMINANT
wvl = np.arange(420, 700, 10)
wvl_XYZ = colour.wavelength_to_XYZ(wvl)
wvl_uv = colour.Luv_to_uv(colour.XYZ_to_Luv(wvl_XYZ, illuminant), illuminant)
wvl_pts = wvl_uv * samples
# wvl_mask = generate_mask(wvl_pts, samples)
# mask_idxs = np.where(wvl_mask == 0)
u = np.linspace(xlim[0], xlim[1], samples)
v = np.linspace(ylim[0], ylim[1], samples)
uu, vv = np.meshgrid(u, v)
# uu[mask_idxs] = 0.3
# vv[mask_idxs] = 0.3
# stack u and v for vectorized computations
uuvv = np.stack((vv,uu), axis=2)
# map -> xy -> XYZ -> sRGB
xy = colour.Luv_uv_to_xy(uuvv)
xyz = colour.xy_to_XYZ(xy)
dat = colour.XYZ_to_sRGB(xyz)
dat = colour.normalise_maximum(dat, axis=-1)
# now make an alpha/transparency mask to hide the background
# and flip u,v axes because of (row, column) index semantics
alpha = np.ones((samples,samples)) # * wvl_mask
dat = np.swapaxes(np.dstack((dat, alpha)), 0, 1)
#dat /= dat[:, :, 0:2].max()
# lastly, duplicate the lowest wavelength so that the boundary line is closed
wvl_uv = np.vstack((wvl_uv, wvl_uv[0,:]))
fig, ax = plt.subplots(figsize=(8,8))
ax.imshow(dat,
extent=[xlim[0], xlim[1], ylim[0], ylim[1]],
interpolation='None',
origin='lower')
ax.set(xlim=(0,0.65), xlabel='CIE u\'',
ylim=(0,0.625), ylabel='CIE v\'')
ax.plot(wvl_uv[:,0], wvl_uv[:,1], c='0', lw=3)
As I look at your code, it is actually not that much different from colour.plotting.CIE_1931_chromaticity_diagram_colours_plot; the biggest change is that you are drawing to the image buffer directly instead of using pylab.scatter, which is much, much faster. I'll test updating the code of that definition to see if it fares better.
It's now 2:30am my time so I'll be a bit brief, but I looked over your code and made the same change (normalize before tacking on the alpha channel) and the plot is now correct! :D
As a niggle, I still get a warning for an invalid value in power:
np.where(L <= 0.0031308, L * 12.92, 1.055 * (L ** (1 / 2.4)) - 0.055))
~colour\models\rgb\transfer_functions\srgb.py:72
Any idea what that is? The only power I see is L^(1/2.4); maybe it's a Python 3 int/float division problem? Really not sure.
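A small sketch of the likely mechanism (it is not an int/float division issue): np.where evaluates both branches for every element, so the fractional power still sees negative, out-of-gamut values even when the linear branch is what gets selected.

```python
import numpy as np

L = np.array([-0.2, 0.001, 0.5])  # -0.2 is outside the sRGB gamut

# np.where evaluates BOTH branches for every element, so the power branch
# computes (-0.2) ** (1 / 2.4), which is NaN and triggers the
# RuntimeWarning, even though the linear branch is selected for -0.2.
with np.errstate(invalid='ignore'):  # suppress the warning for the demo
    encoded = np.where(L <= 0.0031308,
                       L * 12.92,
                       1.055 * (L ** (1 / 2.4)) - 0.055)
# the selected results are still well defined
```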
The code for generate_mask is here: https://github.com/brandondube/prysm/blob/master/prysm/geometry.py#L179
It could be stolen and added to colour if that is within scope. Basically, it fills the interior of a convex polygon defined by its vertices in array-index coordinates (i.e. for 128x128, the "units" are in the range [0,127]). That makes adjustable bounds / not generating a mask on [0,1]x[0,1] a bit messy, but it's not impossible to deal with.
The time complexity is not very good, so it's a little bit slow. I think it is still faster than scipy's Delaunay, and numba could make it fly :) Using wvl = [400, 430, 460, 470, 480, 490, 500, 505, 510, 515, 520, 525, 530, 535, 700] reduces the number of vertices in the polygon and gives a good boundary, visually.
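For illustration, the half-plane test behind filling a convex polygon can be sketched as follows (a simplified stand-in, not prysm's actual generate_mask):

```python
import numpy as np

def convex_polygon_mask(vertices, samples):
    """Fill a convex polygon given counter-clockwise (x, y) vertices in
    array-index coordinates, via half-plane tests against each edge."""
    yy, xx = np.mgrid[0:samples, 0:samples]
    mask = np.ones((samples, samples), dtype=bool)
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # keep points on the left of each directed edge (CCW winding)
        mask &= (x1 - x0) * (yy - y0) - (y1 - y0) * (xx - x0) >= 0
    return mask

square = [(10, 10), (50, 10), (50, 50), (10, 50)]
m = convex_polygon_mask(square, 64)
```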
Excellent!
I'm usually using scipy.spatial.Delaunay for that kind of stuff; I suppose it is fast because it wraps Qhull, and it is heavily used in colour.volume and colour-analysis.
I quickly adjusted the definition we ship to use your great approach. I have not timed it, and it is dumb as I'm filling just too much void space now, but it gives an idea:
import matplotlib
import numpy as np
import pylab
from scipy.spatial import Delaunay
from colour import (Luv_to_uv, XYZ_to_Luv, tstack, xy_to_XYZ, Luv_uv_to_xy,
                    normalise_maximum, XYZ_to_sRGB, tsplit)
from colour.plotting import (DEFAULT_PLOTTING_ILLUMINANT, boundaries, canvas,
                             decorate, display, get_cmfs)
def CIE_1976_UCS_chromaticity_diagram_colours_plot(
        samples=256,
        cmfs='CIE 1931 2 Degree Standard Observer',
        **kwargs):
    """
    Plots the *CIE 1976 UCS Chromaticity Diagram* colours.

    Parameters
    ----------
    samples : numeric, optional
        Samples count on one axis.
    cmfs : unicode, optional
        Standard observer colour matching functions used for diagram bounds.

    Other Parameters
    ----------------
    \**kwargs : dict, optional
        {:func:`boundaries`, :func:`canvas`, :func:`decorate`,
        :func:`display`},
        Please refer to the documentation of the previously listed
        definitions.

    Returns
    -------
    Figure
        Current figure or None.

    Examples
    --------
    >>> CIE_1976_UCS_chromaticity_diagram_colours_plot()  # doctest: +SKIP
    """

    settings = {'figure_size': (8, 8)}
    settings.update(kwargs)

    canvas(**settings)

    cmfs = get_cmfs(cmfs)
    illuminant = DEFAULT_PLOTTING_ILLUMINANT

    triangulation = Delaunay(
        Luv_to_uv(XYZ_to_Luv(cmfs.values, illuminant), illuminant),
        qhull_options='QJ Qf')
    xx, yy = np.meshgrid(
        np.linspace(0, 1, samples), np.linspace(0, 1, samples))
    xy = tstack((xx, yy))
    mask = triangulation.find_simplex(xy) < 0

    XYZ = xy_to_XYZ(Luv_uv_to_xy(xy))
    RGB = normalise_maximum(XYZ_to_sRGB(XYZ, illuminant), axis=-1)
    RGB[mask] = 1

    settings.update({
        'x_ticker': False,
        'y_ticker': False,
        'bounding_box': (0, 1, 0, 1)
    })
    settings.update(kwargs)

    ax = matplotlib.pyplot.gca()
    ax.imshow(RGB,
              extent=[0, 1, 0, 1],
              interpolation='None',
              origin='lower')
    matplotlib.pyplot.setp(ax, frame_on=False)

    boundaries(**settings)
    decorate(**settings)

    return display(**settings)


CIE_1976_UCS_chromaticity_diagram_colours_plot()
Almost the same as above, but with poor man's antialiasing:
import matplotlib
import numpy as np
import pylab
from scipy.ndimage.filters import convolve
from scipy.spatial import Delaunay
from colour import (DEFAULT_FLOAT_DTYPE, Luv_to_uv, XYZ_to_Luv, tstack,
                    xy_to_XYZ, Luv_uv_to_xy, normalise_maximum, XYZ_to_sRGB,
                    tsplit)
from colour.plotting import (DEFAULT_PLOTTING_ILLUMINANT, boundaries, canvas,
                             decorate, display, get_cmfs)
def CIE_1976_UCS_chromaticity_diagram_colours_plot(
        samples=256,
        cmfs='CIE 1931 2 Degree Standard Observer',
        **kwargs):
    """
    Plots the *CIE 1976 UCS Chromaticity Diagram* colours.

    Parameters
    ----------
    samples : numeric, optional
        Samples count on one axis.
    cmfs : unicode, optional
        Standard observer colour matching functions used for diagram bounds.

    Other Parameters
    ----------------
    \**kwargs : dict, optional
        {:func:`boundaries`, :func:`canvas`, :func:`decorate`,
        :func:`display`},
        Please refer to the documentation of the previously listed
        definitions.

    Returns
    -------
    Figure
        Current figure or None.

    Examples
    --------
    >>> CIE_1976_UCS_chromaticity_diagram_colours_plot()  # doctest: +SKIP
    """

    # settings = {'figure_size': (8, 8)}
    settings = {}
    settings.update(kwargs)

    canvas(**settings)

    cmfs = get_cmfs(cmfs)
    illuminant = DEFAULT_PLOTTING_ILLUMINANT

    triangulation = Delaunay(
        Luv_to_uv(XYZ_to_Luv(cmfs.values, illuminant), illuminant),
        qhull_options='QJ Qf')
    xx, yy = np.meshgrid(
        np.linspace(0, 1, samples), np.linspace(0, 1, samples))
    xy = tstack((xx, yy))
    mask = (triangulation.find_simplex(xy) < 0).astype(DEFAULT_FLOAT_DTYPE)

    # poor man's antialiasing: soften the binary mask with a small
    # normalised cross-shaped kernel
    kernel = np.array([[0, 1, 0],
                       [1, 2, 1],
                       [0, 1, 0]]).astype(DEFAULT_FLOAT_DTYPE)
    kernel /= np.sum(kernel)
    mask = convolve(mask, kernel)[:, :, np.newaxis]

    XYZ = xy_to_XYZ(Luv_uv_to_xy(xy))
    RGB = normalise_maximum(XYZ_to_sRGB(XYZ, illuminant), axis=-1)
    RGB[np.isnan(RGB)] = 1
    RGB = np.ones(RGB.shape) * mask + RGB * (1 - mask)

    settings.update({
        'x_ticker': False,
        'y_ticker': False,
        'bounding_box': (0, 1, 0, 1)
    })
    settings.update(kwargs)

    ax = matplotlib.pyplot.gca()
    ax.imshow(RGB,
              extent=[0, 1, 0, 1],
              interpolation=None,
              origin='lower')
    matplotlib.pyplot.setp(ax, frame_on=False)

    boundaries(**settings)
    decorate(**settings)

    return display(**settings)


CIE_1976_UCS_chromaticity_diagram_colours_plot()
By the way, the current diagram plotting definitions, i.e. *_chromaticity_diagram_plot, actually don't generate the scatter points; they only read the images generated by the *_chromaticity_diagram_colours_plot definitions. Those images are quite big, i.e. over 3K wide, and I'm assuming that this is what is taking long: reading, decoding, plotting.
Would the anti-aliasing work better if you applied a convolution to the RGB as well as the mask? If I read it correctly, it is applying a convolution blur to the mask and then multiplying it by RGB, but RGB pixels outside the horseshoe will be white. Ideally, I suppose, you would apply an "expand" filter to the RGB before the blurred mask.
@Nick-Shaw: You mean antialiasing after applying the mask, right? Right now I'm doing a over b, where a is white premultiplied by the mask and b is the RGB square of colours.
Maybe. Not thought it through fully. It feels like there might be edge pixel issues similar to incorrect handling of premultiplied/unpremultiplied alpha.
Ignore me. I realise the RGB square has colour edge to edge until you multiply by the mask, i.e. a straight alpha, so blurring only the mask is fine.
Yes exactly!
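A compact sketch of that straight-alpha composite, with random stand-in colours:

```python
import numpy as np
from scipy.ndimage import convolve

rgb = np.random.rand(8, 8, 3)  # colour square, valid edge to edge
mask = np.zeros((8, 8))
mask[:, :4] = 1.0              # 1 = outside the horseshoe

kernel = np.array([[0, 1, 0],
                   [1, 2, 1],
                   [0, 1, 0]], dtype=float)
kernel /= kernel.sum()
soft = convolve(mask, kernel)[..., np.newaxis]  # soften only the mask

# straight alpha: white over the colours; blurring just the mask is fine
# because rgb carries valid colour everywhere before masking
out = np.ones_like(rgb) * soft + rgb * (1 - soft)
```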
RGB
Mask
Yes, I did that myself. Should have done so before posting!
I was assuming that points outside the horseshoe would produce nan
because they are imaginary, and your isnan
would then make them white. But I was wrong.
I have pushed a branch here with some updates: https://github.com/colour-science/colour/tree/feature%2Fchromaticity_diagrams
I'll have to update the code tomorrow because the diagrams have lost their alpha channel.
I also noticed a line at low resolution in the CIE 1976 UCS diagram, I suppose some NaNs are playing nasty here, so another check to do.
Important
Please notice that all the chromaticity diagram related definitions and resources will be renamed in the near future for consistency with the rest of the API:
colour.plotting.CIE_1931_chromaticity_diagram_plot
should become colour.plotting.chromaticity_diagram_plot_CIE_1931
The output looks good -- do you get the warnings from power when you make these?
some timeit results
old method:
%timeit colour.plotting.CIE_1976_UCS_chromaticity_diagram_plot(standalone=False)
>>> 609 ms ± 54.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
New method w/ 256 samples in [0,0.7] on u' and v'
%timeit make_1976_diagram()
>>> 91.6 ms ± 5.39 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
same laptop, but on battery / throttled right now.
This doesn't include the draw time, and ~10ms are spent in make_1976_diagram destroying the figure object so it doesn't display via the %matplotlib inline Jupyter magic.
Including the drawing, the times are 1.64 s ± 95.2ms and 82.6 ms ± 6.27 ms, respectively.
Note these are with my version (no Delaunay) and the rest of the things that happen inside a plotting call from colour are also not present.
Still, the speedup is ~6x-20x depending on how you slice it.
do you get the warnings from power when you make these?
Yes! There are multiple warnings happening during the various transformations. We have the colour.filter_warnings definition to filter all the Python and/or API warnings.
The power related ones are because some negative numbers (out of sRGB gamut) are passed to the sRGB OETF.
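One way to keep those out-of-gamut negatives away from the fractional power (a sketch of a common mirroring convention, not necessarily what colour does) is to encode the absolute value and restore the sign:

```python
import numpy as np

def srgb_oetf_mirrored(L):
    """Encode |L| with the sRGB OETF and restore the sign, so out-of-gamut
    negatives never reach the fractional power. A common mirroring
    convention, not necessarily colour's behaviour."""
    a = np.abs(L)
    encoded = np.where(a <= 0.0031308,
                       a * 12.92,
                       1.055 * (a ** (1 / 2.4)) - 0.055)
    return np.sign(L) * encoded

out = srgb_oetf_mirrored(np.array([-0.5, 0.0, 0.5]))  # no RuntimeWarning
```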
I have updated the cached images to have alpha channel and doing so I noticed that matplotlib does not handle it properly: https://github.com/matplotlib/matplotlib/issues/3343#issuecomment-347445851
The updated code is now in develop.
So for reference, Matplotlib expects straight alpha as per matplotlib/matplotlib#9906, so I have regenerated the diagrams and updated the source code accordingly. I will close this issue for now; feel free to add any comments though.
I ask that code derived from mine be properly attributed under the MIT license, or removed from colour. Intellectually, expression of computing the background image colours directly on the grid is mine; using delaunay instead of a custom function for the same purpose does not distinguish the two.
Hi Brandon,
I'm very sorry to see that you are acting like a very young kid.
For reference, Colour has been computing chromaticity diagram colours using a meshgrid for over 2.5 years: https://github.com/colour-science/colour/blob/e9b0ccbbe13db280c8296557ddef572b94415e09/colour/plotting/diagrams.py or here in Colour - Analysis: https://github.com/colour-science/colour-analysis/blob/master/colour_analysis/visuals/diagrams.py#L40
If you take a careful look at our 2.5-year-old code, you will notice that the CIE_1931_chromaticity_diagram_colours_plot definition whose speed your original post complained about is not part of the public API; people use CIE_1931_chromaticity_diagram_plot, which loads a very high-resolution image into an array and passes it directly to pylab.imshow to avoid jaggies in the background:
image = matplotlib.image.imread(
    os.path.join(PLOTTING_RESOURCES_DIRECTORY,
                 'CIE_1931_Chromaticity_Diagram_{0}.png'.format(
                     cmfs.name.replace(' ', '_'))))

pylab.imshow(image, interpolation='nearest', extent=(0, 1, 0, 1))
Your great idea (remember, one cannot copyright ideas) is to pass an RGB array directly to pylab.imshow. Now, with that in mind, can you please show us the lines of code we copied from you? I bet you will have a very hard time, because even the example I pasted in this thread here: https://github.com/colour-science/colour/issues/362#issuecomment-347101656 is not in use in Colour.
By the way, should I ask you to attribute the fixes I did to your implementation under the New BSD License?
It's now 2:30am my time so I'll be a bit brief, but I looked over your code and made the same change (normalize before tacking on the alpha channel) and the plot is now correct! :D
Now, something we do that pretty much no one else does is give people attribution when they participate in issue discussions, which is why you are listed in the following locations:
## colour.plotting
- `colour.plotting.CIE_1931_chromaticity_diagram_plot`: (@brandondube, @kelsolaar)
- Name: `chromaticity_diagram_plot_CIE1931`
- Signature: `chromaticity_diagram_plot_CIE1931(cmfs='CIE 1931 2 Degree Standard Observer', show_diagram_colours=True, use_cached_diagram_colours=True, **kwargs)`
- `colour.plotting.CIE_1960_UCS_chromaticity_diagram_plot`: (@brandondube, @kelsolaar)
- Name: `chromaticity_diagram_plot_CIE1960UCS`
- Signature: `chromaticity_diagram_plot_CIE1960UCS(cmfs='CIE 1931 2 Degree Standard Observer', show_diagram_colours=True, use_cached_diagram_colours=True, **kwargs)`
- `colour.plotting.CIE_1976_UCS_chromaticity_diagram_plot`: (@brandondube, @kelsolaar)
- Name: `chromaticity_diagram_plot_CIE1976UCS`
- Signature: `chromaticity_diagram_plot_CIE1976UCS(cmfs='CIE 1931 2 Degree Standard Observer', show_diagram_colours=True, use_cached_diagram_colours=True, **kwargs)`
I'm happy to give a reference to yourself and this thread in the module itself, but since we did not copy any of your code, we don't have to do anything regarding licensing.
I'm back again!
I was frustrated by the slow speed of the chromaticity diagram plotters, so I have begun to re-implement them. I learned that this requires re-writing most of the color space transforms into a numpy ufunc-like format. Versions of the necessary functions that have this format can be found here. Note these versions require that the input be a numpy array rather than an iterable, but the line
var = np.asarray(var)
can be added to the top of each function to remove this requirement. Some additional logic could be added so that e.g. if the input was a list, the output is a list, but this is a detail.
The current plotters, as best I understand, generate a 4000x4000-point scatter plot and color the dots with sRGB tones -- matplotlib wasn't made for plotting 16 million data points quickly!
A more efficient scheme is to make a meshgrid in the desired color space (in my case, u'v', though the method is general), "block out" values outside of the horseshoe, and shade with sRGB tones. Additional things like the Planckian locus can be added, but the performance issue is with the massive number of scatter points.
Progress towards this can be found here. Unfortunately, a warning is thrown in the XYZ->sRGB conversion; I presume this is because there are imaginary colors present. An image of the result, in sRGB, is below.
I would appreciate any help with debugging this and then pulling the changes into colour.
timeit results from my laptop with an i7-7700HQ (4c/8t @ 3.6 GHz):
Since this includes expensive warning prints, I assume that this will run closer to 25ms (a 20x improvement!) when it works properly. A 128x128 grid would also be ~4x faster, bringing the time to less than 10ms. That would leave budget for e.g. expensive Lanczos interpolation of the chromatic surface, achieving similar or superior visual quality in greatly reduced time.
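The interpolation idea can be sketched directly with matplotlib, which resamples the coarse grid at draw time (random stand-in colours, not actual diagram data):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt

rgb = np.random.rand(128, 128, 3)  # coarse stand-in for diagram colours

fig, ax = plt.subplots()
# matplotlib resamples the coarse grid at draw time with the chosen filter
im = ax.imshow(rgb, interpolation='lanczos', origin='lower')
fig.canvas.draw()
plt.close(fig)
```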
What I don't know, is why I get imaginary colors. Any help in that regard would be appreciated.