mikebmcl opened this issue 6 years ago
Continuing to research differences in operators, below are color_burn
variants:
```cpp
// Cairo:
auto f = [](float a, float b){
    if( 1.f - b <= numeric_limits<float>::min() )
        return 1.f;
    return a > numeric_limits<float>::min() ? 1.f - min(1.f, (1.f - b) / a) : 0.f;
};

// CoreGraphics:
auto f = [](float a, float b){
    if( 1.f - b <= numeric_limits<float>::min() )
        return 1.f;
    return a > numeric_limits<float>::min() ? 1.f - (1.f - b) / a : numeric_limits<float>::lowest();
};
```
soft_light is my favorite though, period!
```cpp
// Cairo:
auto f = [](float a, float b){
    if( a <= 0.5 ) {
        return b - (1.f - 2.f * a) * b * (1.f - b);
    }
    else {
        if( b <= 0.25 )
            return b + (2.f * a - 1.f) * (((16.f * b - 12.f) * b + 4.f) * b - b);
        else
            return b + (2.f * a - 1.f) * (sqrt(b) - b);
    }
};

// CoreGraphics:
auto f = [](float a, float b){
    return (1.f - 2.f * a) * b * b + 2.f * a * b;
};
```
While browsing I've found at least five different formulas, all called "Soft Light". And it looks like nobody cares about these differences: all middleware/libraries around (Cairo and Moz2D, for instance) just pretend that the underlying math is the same or that these differences are too minor to bother with. So I guess we simply can't demand precise adherence to the "canonical" formulas written in the paper. Perhaps the paper should convey recommended math, with an allowance to optimize or tune these formulas to some extent.
Apple's variants of formulas were added here: a15bbd3668bf30a3be3ab92e0edca3fa76715be0.
Continuing the discussion about pixel formats: IMHO it's not feasible to demand any particular underlying pixel layout from an unknown backend. This list might give a sense of the scope of the problem:
```cpp
enum pixel_layout {
    // 32 bits per pixel
    b8g8r8a8,
    a8r8g8b8,
    r8g8b8a8,
    a8b8g8r8,
    // 16 bits per pixel
    r5g6b5,
    b5g6r5,
    r5g5b5a1,
    a1r5g5b5,
    b5g5r5a1,
    a1b5g5r5,
    // 8 bits per pixel
    a8
};
```
And there is also the alpha_mode enumeration, which basically triples this list. (https://github.com/mikebmcl/P0267_RefImpl/blob/master/P0267_RefImpl/P0267_RefImpl/xinterchangebuffer.h) While many graphics APIs can accept an arbitrary pixel layout (as CoreGraphics does), this can cause a significant performance loss from byte rearranging if the target display surface has another pixel layout (and that layout usually can't be controlled). For example, macOS tends to treat RGBA32 as a8b8g8r8, while iOS uses b8g8r8a8. So IMHO the paper should only demand some storage capacity per channel of a single pixel, instead of mandating its underlying layout.
Changed all layout and other requirements for format to be implementation defined in D0267R8. Also added a mechanism for implementations to provide their own support for additional formats. Right now only three formats are part of the enum class in the proposal, and they are required to be supported only for basic_image_surface: format::argb32, format::xrgb32, and a third. These seem to have universal support among graphics technologies for render targets (i.e. basic_image_surface objects).
That still leaves us with the compositing_op discrepancies. We'll talk offline about those.
Numerically, the differences between these 3 troublesome operators can be significant in some cases (especially with low alpha values), way beyond any reasonable epsilon for calculation error. However, visually this difference is hard to perceive. I built sample images which show blending distributions: the X axis defines the background color (rgba_color(x, x, x)) and the Y axis defines the blending color (rgba_color(y, y, y)):
color_dodge in Cairo and in CoreGraphics:
color_burn in Cairo and in CoreGraphics:
soft_light in Cairo and in CoreGraphics:
Can anyone spot the difference? (The brightness can differ a bit, since at the moment the CG backend saves images with the Generic RGB color profile, while Cairo uses sRGB.)
For comparison, this is hard_light, which is numerically equal up to epsilon, in Cairo and CoreGraphics:
I can spot it, yeah. Looking at the side-by-side images it appears we have a color banding problem. To me it's easiest to see in the soft_light images. Both have some banding, but the locations are different and, to me, it is slightly more prominent in CG.
This is likely a result of Generic RGB vs. sRGB (but could potentially be a result of PNG encoding even though PNG requires the use of the DEFLATE compression algorithm and thus should produce the same results on all conforming implementations provided that the same compression level is used).
We know there is a problem. We need to find any and all locations where problems appear in the process, and we need to do so for all backends, since we should not assume that the cairo backend or any other is correct. We can locate problems by writing out the raw pixel data directly and making comparisons between the backend implementation code and the results of the reference implementation of the image file format (e.g. libpng for PNG images). There are several steps I can think of right now that we need:
Given ::std::numeric_limits<float>::is_iec559 == true, there should be absolute equality here, since we are simply generating valid pixel data into a memory buffer, writing that raw pixel data out to files, and then using the reference implementations of each image file format to save that data into the resulting image file format. This lets us know whether or not there's a problem in the implementations of the reference libraries on each platform. If there's a discrepancy here, it's a serious impediment to the testing process and we need to locate the reason.
Continue discussion of ISOCPP-2D/io2dts#4 here.