DaltonLens / daltonlens.org

DaltonLens public website
https://daltonlens.org
Apache License 2.0

Review of Open Source Color Blindness Simulations | DaltonLens #4

utterances-bot opened 2 years ago

utterances-bot commented 2 years ago

Review of Open Source Color Blindness Simulations | DaltonLens

There are lots of available programs for color blindness simulation, but many are actually very inaccurate. Let’s find out what methods we can trust and what code can be safely copy/pasted :)

https://daltonlens.org/opensource-cvd-simulation/

kachkaev commented 2 years ago

Thanks for this literature review! I worked on a datavis recently and needed to check the robustness of my color choices, so this topic is very relevant. Here are the two resources I found useful:

Here is my take, inspired by the above material: JS code. Using the CSS `filter` property and React, it was quite easy to simulate different conditions and test my web vis in real time:

(screenshot: the web vis rendered under several simulated conditions, 2021-11-15)
nburrus commented 2 years ago

Hi, thanks for your comment! I've just had a look at your code / references, and I have two comments:

1) The SVG matrix filters should be applied in the linearRGB space, not sRGB. I think the error comes from the original dev.to post, which assumes that these filters should operate on the raw sRGB values. That is incorrect: the conversion to the LMS space only makes sense after removing the sRGB non-linearity, otherwise you can't get to LMS with a simple matrix multiplication. So I would change color-interpolation-filters="sRGB" to color-interpolation-filters="linearRGB". One obvious consequence of not doing this is that pure red becomes way too dark for protans (which is the one I can evaluate myself). Going to add a comment to that post.
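To make the point concrete, here is a small Python sketch (not from the post) contrasting the correct linear-RGB pipeline with the naive sRGB one. The protanopia matrix values are approximately the Viénot 1999 ones shipped in libDaltonLens; treat them as illustrative rather than authoritative:

```python
def srgb_to_linear(c):
    """Remove the sRGB non-linearity (decode a channel in [0,1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Re-apply the sRGB non-linearity (encode a channel in [0,1])."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Viénot 1999 protanopia matrix, valid in *linear* RGB only
# (values approximately those published in libDaltonLens).
PROTAN = [
    [0.10889, 0.89111, 0.0],
    [0.10889, 0.89111, 0.0],
    [0.00447, -0.00447, 1.0],
]

def mat_mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def simulate_protan(rgb):
    """Correct pipeline: decode sRGB -> matrix in linear RGB -> re-encode."""
    lin = [srgb_to_linear(c) for c in rgb]
    sim = mat_mul(PROTAN, lin)
    return [linear_to_srgb(max(0.0, min(1.0, c))) for c in sim]

def simulate_protan_wrong(rgb):
    """The common mistake: applying the matrix directly to sRGB values."""
    sim = mat_mul(PROTAN, rgb)
    return [max(0.0, min(1.0, c)) for c in sim]
```

With color-interpolation-filters="sRGB" the browser effectively runs the `simulate_protan_wrong` path, which is why pure red comes out much darker than it should.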

~Otherwise the Chrome Devtools blog seems correct since it keeps the default value for the interpolation filter, which should be linearRGB. But it's probably better to enforce it explicitly to be 100% sure.~ [Edit] I take this back: it seems that Chrome's rendering filters actually happen in the sRGB space too and are way too dark. Going to report the issue there as well. [Update on Nov 16, 2021]: the issue should now be fixed in Chromium, see the bug report.

2) For tritanopia the single-matrix approaches are unfortunately all pretty bad. The only method I know that works reasonably well is that of Brettel, Viénot & Mollon 1997. But I'm not sure that it can be implemented with SVG filters (you can see a simple C version in libDaltonLens).
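The reason a single matrix is not enough is that Brettel et al. project each color onto one of two half-planes, chosen per pixel. The structure can be sketched in a few lines of Python; note that the white point and the two anchor stimuli below are HYPOTHETICAL placeholder values for illustration (a real implementation should take the calibrated LMS constants from libDaltonLens):

```python
# Structural sketch of the Brettel, Viénot & Mollon 1997 two-plane
# projection for tritanopia. All numeric LMS values are placeholders.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

E  = [1.0, 1.0, 1.0]   # neutral (white) axis in this toy LMS space
A1 = [0.9, 0.7, 0.0]   # placeholder anchor stimulus (~475 nm side)
A2 = [0.1, 0.3, 0.0]   # placeholder anchor stimulus (~575 nm side)

N1 = cross(E, A1)      # normal of the plane through origin, E and A1
N2 = cross(E, A2)      # normal of the plane through origin, E and A2
SEP = cross(E, [(a + b) / 2 for a, b in zip(A1, A2)])  # separation plane

def simulate_tritan_lms(lms):
    """Project an LMS color onto one of the two Brettel half-planes."""
    n = N2 if dot(lms, SEP) >= 0 else N1   # pick side of separation plane
    L, M, S = lms
    # L and M stay (those cones are functional); solve n . lms' = 0 for S'
    S_proj = -(n[0] * L + n[1] * M) / n[2]
    return [L, M, S_proj]
```

This per-pixel branch is exactly what a single feColorMatrix cannot express: the effective matrix depends on which side of the separation plane the input color falls.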

ndesmic commented 2 years ago

Thanks for bringing this to my attention. This is very useful info!

Waitsnake commented 2 years ago

Thanks for your great work on this overview reviewing and comparing the different publicly available algorithms to simulate CVD! Since I have protanomaly myself, I was also looking around a while ago for software that could compensate for my weakness, at least on a computer monitor. I also tried out different available software and looked into different available algorithms, though not as deeply and scientifically as you did here. But I saw a picture similar to the one you describe: most algorithms are not generic and can only simulate (and some compensate) full protanopia, deuteranopia and tritanopia, ignoring the fact that there are weaker forms of these deficiencies like protanomaly, deuteranomaly and tritanomaly. Many algorithms I saw had just copy/pasted the same matrices from each other, lacking explanations of how they came up with the values, or not explaining how to calculate different matrices for different deficiency levels. I hope your work helps us end up with better software in the future, now that programmers have this kind of overview.

Myndex commented 1 year ago

I like the in-depth discussion on this site, and your conclusions very much echo my own from when we were developing the Myndex CVD sim a few years ago. We settled on Viénot 99 for protan/deutan, Brettel 97 for tritan, and also included an experimental blue cone monochromacy sim.

On a second page there is an experimental sRGB sim which works per Grassmann's laws as opposed to LMS space. All the sims show you multiple variants at once. You can use the example photos or process one of your own; it's all JS, so processing happens on your local machine.

Nice work, this is a great resource I'm going to link to.

Thank you for reading

nburrus commented 1 year ago

Thanks for your comment @Myndex! I'm curious if you ended up publishing your research about the direct approach for sRGB monitors?

Myndex commented 1 year ago

..."Thanks for your comment @Myndex! I'm curious if you ended up publishing your research about the direct approach for sRGB monitors?"...

Hi @nburrus,

No, I haven't published yet; it's part of a larger project for a perceptually uniform color space, also related to the APCA. And that said, I'm behind on a lot of papers to be published at this point, LOL.

schulzch commented 1 year ago

Thanks for the review.

I would like to point you to our https://github.com/UniStuttgart-VISUS/visual-system-simulator, which supports a few color deficiency simulation approaches as well as other visual deficiencies, in case you're interested. It runs on desktop hardware, Android, and some mixed-reality hardware.

We also ran a study on accessibility in visualization: Katrin Angerbauer, Nils Rodrigues, Rene Cutura, Seyda Öney, Nelusa Pathmanathan, Cristina Morariu, Daniel Weiskopf, and Michael Sedlmair. 2022. Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 134, 1–23. https://doi.org/10.1145/3491102.3502133

nburrus commented 1 year ago

Finally took some time to check it, and it's very interesting work @schulzch, thanks for sharing! I wonder if there is a PDF preprint somewhere for the associated 2019 paper "A Framework for Pervasive Visual Deficiency Simulation"? I could not find one after a quick search.

schulzch commented 1 year ago

I'm afraid not, and I haven't checked whether I can upload it to arXiv or something like that - check your mailbox ;)

The more recent stuff (see the wgpu branch) is described in this thesis: Visualization of differences in perception caused by vision deficiency https://elib.uni-stuttgart.de/handle/11682/11795

The RGCf patterns especially are pretty cool - they are a reimplementation of Aleman, A., Wang, M. & Schaeffel, F. Reading and Myopia: Contrast Polarity Matters. Sci Rep 8, 10840 (2018). https://www.nature.com/articles/s41598-018-28904-x

Myndex commented 1 year ago

..."Thanks for your comment @Myndex! I'm curious if you ended up publishing your research about the direct approach for sRGB monitors?"...

Hi @nburrus,

While I have not published an official study, as it is part of the SACAM project, there IS a simulator using this basis with sRGB (i.e. a Grassmann's-laws CVD simulator).

THIS IS EXPERIMENTAL:

Here's the link: https://www.myndex.com/CVD/sRGBCVD

Using this approach I found I was able to duplicate the look of the Brettel/Viénot model with much less math, though that's not really the reason for developing this.

NOTE: I do recommend the standard Brettel/Viénot et alia model as used on my other sims, if for no other reason than Brettel/Viénot is widely considered the accurate standard.