cpp-io2d / P0267_RefImpl

Reference Implementations of P0267, the proposed 2D graphics API for ISO C++

Compatibility issues #75

Open mikebmcl opened 6 years ago

mikebmcl commented 6 years ago

Continue discussion of ISOCPP-2D/io2dts#4 here.

mikekazakov commented 6 years ago

Continuing the research into operator differences, below are the color_burn variants:

    // Cairo:
    auto f = [](float a, float b){
       if( 1.f - b <= numeric_limits<float>::min() )
            return 1.f;
       return a > numeric_limits<float>::min() ? 1.f - min(1.f, (1.f - b) / a) : 0.f;
    };

    // CoreGraphics:
    auto f = [](float a, float b){
       if( 1.f - b <= numeric_limits<float>::min() )
            return 1.f;
       return a > numeric_limits<float>::min() ? 1.f - (1.f - b) / a : numeric_limits<float>::lowest();
    };
mikekazakov commented 6 years ago

soft_light is my favorite though, period!

    // Cairo:
    auto f = [](float a, float b){
        if( a <= 0.5 ) {
            return b - (1.f - 2.f * a) * b * (1.f - b);
        }
        else {
            if( b <= 0.25 )
                return b + (2.f * a - 1.f) * (((16.f * b - 12.f) * b + 4.f) * b - b);
            else
                return b + (2.f * a - 1.f) * (sqrt(b) - b);
        }
    };

    // CoreGraphics:
    auto f = [](float a, float b){
        return (1.f - 2.f * a) * b * b + 2.f * a * b;
    };

While browsing I've found at least five different formulas, all called "Soft Light". And it looks like nobody cares about these differences: the middleware/libraries around (Cairo and Moz2D, for instance) just pretend that the underlying math is the same, or that the differences are too minor to bother with. So I guess we simply can't demand precise adherence to the "canonical" formulas written in the paper. Perhaps the paper should convey recommended math, with an allowance to optimize or tune these formulas to some extent.

Apple's variants of formulas were added here: a15bbd3668bf30a3be3ab92e0edca3fa76715be0.

mikekazakov commented 6 years ago

Continuing the discussion about pixel formats. IMHO it's not feasible to demand any particular underlying pixel layout from an unknown backend. This list might give a sense of the scope of the problem:

   enum pixel_layout {
        // 32 bits per pixel
        b8g8r8a8,
        a8r8g8b8,
        r8g8b8a8,
        a8b8g8r8,
        // 16 bits per pixel
        r5g6b5,
        b5g6r5,
        r5g5b5a1,
        a1r5g5b5,
        b5g5r5a1,
        a1b5g5r5,        
        // 8 bits per pixel
        a8
    };

And there is also the alpha_mode enumeration, which basically triples this list (https://github.com/mikebmcl/P0267_RefImpl/blob/master/P0267_RefImpl/P0267_RefImpl/xinterchangebuffer.h). While many graphics APIs can accept an arbitrary pixel layout (as CoreGraphics does), this can cause a significant performance loss from byte rearranging if the target display surface has a different pixel layout (and that layout usually can't be controlled). For example, macOS tends to treat RGBA32 as a8b8g8r8, while iOS uses b8g8r8a8. So IMHO the paper should only demand some storage capacity per channel from a single pixel, instead of mandating its underlying layout.

mikebmcl commented 6 years ago

Changed all layout and other requirements for format to be implementation defined in D0267R8. Also added a mechanism for implementations to provide their own support for additional formats. Right now only three formats are part of the enum class in the proposal, and they are required to be supported only for basic_image_surface.

These seem to have universal support among graphics technologies for render targets (i.e. basic_image_surface objects).

mikebmcl commented 6 years ago

That still leaves us with the compositing_op discrepancies. We'll talk offline about those.

mikekazakov commented 6 years ago

Numerically, the differences between these 3 troublesome operators can be significant in some cases (especially at low alpha values), way beyond any reasonable epsilon for calculation error. Visually, however, the difference is hard to perceive. I built sample images which show the blending distributions: the X axis defines the background color (rgba_color(x, x, x)) and the Y axis defines the blending color (rgba_color(y, y, y)):

color_dodge in Cairo and in CoreGraphics: color_dodge_cairo color_dodge_cg

color_burn in Cairo and in CoreGraphics: color_burn_cairo color_burn_cg

soft_light in Cairo and in CoreGraphics: soft_light_cairo soft_light_cg

Can anyone spot the difference? (The brightness can differ a bit, since at the moment the CG backend saves images with the Generic RGB color profile, while Cairo uses sRGB.)

For comparison, this is hard_light, which is numerically equal up to epsilon between Cairo and CoreGraphics: hard_light_cairo hard_light_cg

mikebmcl commented 6 years ago

I can spot it, yeah. Looking at the side-by-side images it appears we have a color banding problem. To me it's easiest to see in the soft_light images. Both have some banding, but the locations are different and, to me, it is slightly more prominent in CG.

This is likely a result of Generic RGB vs. sRGB (but it could potentially be a result of the PNG encoding, even though PNG requires the DEFLATE compression algorithm and thus should produce the same results on all conforming implementations, provided the same compression level is used).

We know there is a problem. We need to find any and all places in the process where problems appear, and we need to check all backends, since we should not assume that the cairo backend or any other is correct. We locate problems by writing out the raw pixel data directly and comparing the results of the backend implementation code against the results of the reference implementation of the image file format (e.g. libPNG for PNG images). There are several steps I can think of right now:

  1. We need to ensure that the reference images are correct. So we create several algorithms that generate raw pixel data, and use the reference implementation of each file format to write out that data on each platform using the same settings. We then test for their equality (excluding any header data that doesn't affect the outcome of processing those image files) across platforms. As long as ::std::numeric_limits<float>::is_iec559 == true there should be absolute equality here, since we are simply generating valid pixel data into a memory buffer, writing that raw pixel data out to files, and then using the reference implementation of each image file format to save that data into the resulting image file format. This tells us whether there's a problem in the builds of the reference libraries on each platform. If there's a discrepancy here, it's a serious impediment to the testing process and we need to locate the reason.
  2. Once we clear the first step, we need only test each subsequent platform and implementation against the successful test result images, which will have survived testing on Windows, Linux, Darwin, etc. The next test is to compare the data read back from the image files against the raw pixel data. This is tricky because of how implementations take the data they read from the image files and convert it (if needed) to the color space they use internally. Tests will thus need to take that into account, and we'll need to specify at least a function call that will provide the conversion for testing.
  3. After we clear step 2, we can begin to test results against reference images. If non-trivial discrepancies still appear, we will need to perform further pixel-wise examinations to find out why, and whether we can resolve them without giving up the benefits of platform-provided functionality.