maplibre / maplibre-gl-js

MapLibre GL JS - Interactive vector tile maps in the browser
https://maplibre.org/maplibre-gl-js/docs/

Add WebGL 2 capabilities to test suites #2420

Closed birkskyum closed 1 year ago

birkskyum commented 1 year ago

Rationale

Unit tests, integration tests, and render tests all currently rely heavily on the gl npm package, i.e. the under-maintained headless-gl. It relies on an unmaintained version of ANGLE, which is a distant fork of the Google repo.

The gl package has been good enough up until now to run our tests in node, but unfortunately it only has WebGL 1 support. With WebGL 2 now widely supported, and WebGPU being released within a week, we need a more modern testing story. This lack of modern support has already proved problematic in PRs like #1891, and it'll only get worse. The Google ANGLE repo actually got WebGL 2 support not too long ago, but there is slim hope it will ever land in the ANGLE fork used by the gl package. There has been an open WebGL 2 ticket since 2017, so I wouldn't hold my breath on that one: https://github.com/stackgl/headless-gl/issues/109#issuecomment-1374537584

How do the current test suites work?

Currently, our unit tests are run with Jest in a JSDOM environment. The first thing each unit test calls is beforeMapTest(), which runs a lot of browser mocking code we keep in /src/util/util.ts. Part of that mocking is creating a WebGL context, and that is where we pull in the gl package.
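For context, a minimal sketch of the kind of mocking this involves (illustrative only, not the actual util.ts code; the function name is made up): JSDOM has no real rendering, so getContext() is patched to hand back a headless-gl context created by the gl package, which is why only WebGL 1 is available there.

import createGlContext from 'gl';

export function mockWebGlContext() {
    // JSDOM canvases have no real getContext(); return a headless-gl WebGL 1 context instead.
    (HTMLCanvasElement.prototype as any).getContext = function (type: string) {
        if (type !== 'webgl' && type !== 'experimental-webgl') return null; // headless-gl has no 'webgl2'
        return createGlContext(this.width || 512, this.height || 512, {preserveDrawingBuffer: true});
    };
}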

For render tests, we don't use JSDOM or beforeMapTest. Instead we run the tests with the Jest node environment, and additionally run /test/integration/render/mock_browser_for_node.ts to create all the globals, like global.navigator.
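As a rough illustration of what that mock file has to provide (simplified, not the real mock_browser_for_node.ts), it boils down to defining browser globals on node's global object before the library code touches them:

// Illustrative only - the real file defines more globals and with more fidelity.
(global as any).navigator = {userAgent: 'node'};
(global as any).devicePixelRatio = 1;
(global as any).performance = (global as any).performance || {now: () => Date.now()};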

What could we do instead?

We could pull in a browser through either Playwright or Puppeteer, create an actual canvas, and create a webgl/webgl2/webgpu context inside it. This would allow us to delete all the mocking code for the integration/render tests, and to get features as they land in the evergreen browsers. The unit tests would still need the mocks, because they're necessary to generate the coverage reports.
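A minimal sketch of the idea (assuming Puppeteer here; Playwright is nearly identical): the context is created in a real Chromium, so whatever the browser ships - WebGL 2 today, WebGPU later - is available to the tests without any mocking.

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const hasWebGl2 = await page.evaluate(() => {
    // runs inside the real browser page, not in node
    const canvas = document.createElement('canvas');
    return canvas.getContext('webgl2') !== null;
});
console.log('WebGL 2 available:', hasWebGl2);
await browser.close();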

IvanSanchez commented 1 year ago

The downside of ditching headless-gl is support for writing the result of pixelmatch to files (in an automated manner). AFAIK, it's not possible to let a puppeteer'd browser have access to the filesystem and write the encoded PNG artifacts to disk. And I speak from experience here: debugging WebGL on a niche platform without access to the pixelmatch diffs is hard.

I currently run a dual puppeteer + headless-gl test setup based on Jasmine at https://gitlab.com/IvanSanchez/gleo/-/tree/master/spec, and it's serving my needs quite well. Perhaps inspiration can be taken from there. I also had the forethought of splitting the rendering logic from the DOM logic, avoiding the need for JSDOM in the headless runs; maybe something to consider for MapLibre as well.

HarelM commented 1 year ago

I think it's possible to use pixelmatch with the browser, it's just harder. I managed to do it in my project ngx-maplibre-gl, an Angular wrapper around MapLibre, with Cypress I believe. The main pain point is the time the tests take to run, which is a lot longer in the browser compared to node... I've tried several ideas - multiple tabs, multiple browsers - but it's still too slow...

IvanSanchez commented 1 year ago

The main pain point is the time the tests take to run, which is a lot longer in the browser compared to node

I beg to differ. I just ran the Gleo unit tests to time them - it's 65 seconds on headless node, versus 22 on Firefox. If maplibre is slow when tested in browsers, that calls for some profiling.

birkskyum commented 1 year ago

I could be wrong, but it looks like it's possible to save an image from puppeteer to a file in multiple ways - e.g. through screenshot:

// Capture screenshot
await page.screenshot({
  path: 'screenshot.jpg'
});

... or by extracting the image from a response buffer: https://stackoverflow.com/questions/71103387/puppeteer-to-save-image-open-in-the-browser

Playwright even has this concept of an "Element screenshot": https://playwright.dev/docs/screenshots
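Another option that keeps all filesystem access on the node side (a sketch, assuming a puppeteer/playwright page variable from the test harness and a map canvas created with preserveDrawingBuffer: true): the page only hands the encoded PNG back as a data URL, and node writes the artifact.

import fs from 'fs';

const dataUrl = await page.evaluate(
    () => (document.querySelector('canvas') as HTMLCanvasElement).toDataURL('image/png'));
// Strip the "data:image/png;base64," prefix and write the artifact from node.
fs.writeFileSync('actual.png', Buffer.from(dataUrl.split(',')[1], 'base64'));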

IvanSanchez commented 1 year ago

Fetching the image from pixelmatch to display it on the webpage is not a big deal - I've done it before at https://gitlab.com/IvanSanchez/gleo/-/blob/master/spec/helpers/pixelmatchHelper-browser.mjs . The problem is accessing the filesystem, reliably, in whatever container out of your control the tests run in.

birkskyum commented 1 year ago

I'm probably a bit slow here, but just to clarify, is the issue to write the images, or read the images, or both?

HarelM commented 1 year ago

Just to be clear, I generally agree the tests should run in the browser, as all this mocking is "dangerous". The main pain point is run time for the render tests; the run time of the rest of the tests is negligible. Unfortunately, I don't have the code that I wrote for running the render tests in the browser, but part of the work there was to refactor and simplify the tests into a single file and move all the mocking to specific functions outside it, which is how it is today. So wrapping the running code with playwright shouldn't be a lot of work; it didn't take me more than a few hours.

HarelM commented 1 year ago

BTW, the following is the Cypress driver code used in ngx-maplibre-gl: https://github.com/maplibre/ngx-maplibre-gl/blob/4571694bfc0adb4b4849ddfdab50d3a38ae252d3/projects/showcase/cypress/support/e2e-driver.ts#L215 Cypress can read files from the file system, and playwright can run node code prior to test execution, so I don't see a problem there. I might be missing something obvious though...

birkskyum commented 1 year ago

When you tested render tests in a browser and found it too slow - was that using Cypress? I'm asking because I've seen cases where Cypress has been significantly slower than lighter Playwright or Puppeteer based solutions. There are also some write-ups like this.

HarelM commented 1 year ago

No, it was using playwright. I can work on recreating this file if you would like a starting point.

birkskyum commented 1 year ago

That would be awesome

HarelM commented 1 year ago

Caution!!! Extremely hackish code below! I think applyOperations isn't fully functional, but the following is the run_render_tests.ts code adapted to playwright. Not all of the tests are passing, so you'll need to investigate further, but this allows understanding how much time the render tests take when not run in node.

/* eslint-disable no-process-exit */
//import './mock_browser_for_node';
import canvas from 'canvas';
import path, {dirname} from 'path';
import fs from 'fs';
import st from 'st';
import {PNG} from 'pngjs';
import pixelmatch from 'pixelmatch';
import {fileURLToPath} from 'url';
import {globSync} from 'glob';
import nise, {FakeXMLHttpRequest} from 'nise';
import {createRequire} from 'module';
import http from 'http';
import rtlText from '@mapbox/mapbox-gl-rtl-text';
import localizeURLs from '../lib/localize-urls';
import maplibregl from '../../../src/index';
//browser from '../../../src/util/browser';
import * as rtlTextPluginModule from '../../../src/source/rtl_text_plugin';
import CanvasSource from '../../../src/source/canvas_source';
import customLayerImplementations from './custom_layer_implementations';
import type Map from '../../../src/ui/map';
import type {StyleSpecification} from '@maplibre/maplibre-gl-style-spec';
import type {PointLike} from '../../../src/ui/camera';
import {Browser, BrowserContext, BrowserType, chromium, Page} from 'playwright';

const {fakeXhr} = nise;
const {plugin: rtlTextPlugin} = rtlTextPluginModule;
const {registerFont} = canvas;

// @ts-ignore
const __dirname = dirname(fileURLToPath(import.meta.url));
// @ts-ignore
const require = createRequire(import.meta.url);
registerFont('./node_modules/npm-font-open-sans/fonts/Bold/OpenSans-Bold.ttf', {family: 'Open Sans', weight: 'bold'});

rtlTextPlugin['applyArabicShaping'] = rtlText.applyArabicShaping;
rtlTextPlugin['processBidirectionalText'] = rtlText.processBidirectionalText;
rtlTextPlugin['processStyledBidirectionalText'] = rtlText.processStyledBidirectionalText;

type TestData = {
    id: string;
    width: number;
    height: number;
    pixelRatio: number;
    recycleMap: boolean;
    allowed: number;
    /**
     * Perceptual color difference threshold, number between 0 and 1, smaller is more sensitive
     * @default 0.1285
     */
    threshold: number;
    ok: boolean;
    difference: number;
    timeout: number;
    addFakeCanvas: {
        id: string;
        image: string;
    };
    axonometric: boolean;
    skew: [number, number];
    fadeDuration: number;
    debug: boolean;
    showOverdrawInspector: boolean;
    showPadding: boolean;
    collisionDebug: boolean;
    localIdeographFontFamily: string;
    crossSourceCollisions: boolean;
    operations: any[];
    queryGeometry: PointLike;
    queryOptions: any;
    error: Error;
    maxPitch: number;
    continuesRepaint: boolean;

    // base64-encoded content of the PNG results
    actual: string;
    diff: string;
    expected: string;
}

type RenderOptions = {
    tests: any[];
    recycleMap: boolean;
    report: boolean;
    seed: string;
}

type StyleWithTestData = StyleSpecification & {
    metadata : {
        test: TestData;
    };
}

// https://stackoverflow.com/a/1349426/229714
function makeHash(): string {
    const array = [];
    const possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';

    for (let i = 0; i < 10; ++i)
        array.push(possible.charAt(Math.floor(Math.random() * possible.length)));

    // join array elements without commas.
    return array.join('');
}

function checkParameter(options: RenderOptions, param: string): boolean {
    const index = options.tests.indexOf(param);
    if (index === -1)
        return false;
    options.tests.splice(index, 1);
    return true;
}

function checkValueParameter(options: RenderOptions, defaultValue: any, param: string) {
    const index = options.tests.findIndex((elem) => { return String(elem).startsWith(param); });
    if (index === -1)
        return defaultValue;

    const split = String(options.tests.splice(index, 1)).split('=');
    if (split.length !== 2)
        return defaultValue;

    return split[1];
}
/**
 * Compares the Uint8Array that was created to the expected file in the file system.
 * It updates testData with the results.
 *
 * @param directory The base directory of the data
 * @param testData The test data
 * @param data The actual image data to compare the expected to
 * @returns nothing as it updates the testData object
 */
function compareRenderResults(directory: string, testData: TestData, data: Uint8Array) {
    let stats;
    const dir = path.join(directory, testData.id);
    try {
        // @ts-ignore
        stats = fs.statSync(dir, fs.R_OK | fs.W_OK);
        if (!stats.isDirectory()) throw new Error();
    } catch (e) {
        fs.mkdirSync(dir);
    }

    const expectedPath = path.join(dir, 'expected.png');
    const actualPath = path.join(dir, 'actual.png');
    const diffPath = path.join(dir, 'diff.png');

    const width = Math.floor(testData.width * testData.pixelRatio);
    const height = Math.floor(testData.height * testData.pixelRatio);
    const actualImg = new PNG({width, height});

    // PNG data must be unassociated (not premultiplied)
    for (let i = 0; i < data.length; i++) {
        const a = data[i * 4 + 3] / 255;
        if (a !== 0) {
            data[i * 4 + 0] /= a;
            data[i * 4 + 1] /= a;
            data[i * 4 + 2] /= a;
        }
    }
    actualImg.data = data as any;

    // there may be multiple expected images, covering different platforms
    let globPattern = path.join(dir, 'expected*.png');
    globPattern = globPattern.replace(/\\/g, '/');
    const expectedPaths = globSync(globPattern);

    if (!process.env.UPDATE && expectedPaths.length === 0) {
        throw new Error(`No expected*.png files found at ${dir}; did you mean to run tests with UPDATE=true?`);
    }

    if (process.env.UPDATE) {
        fs.writeFileSync(expectedPath, PNG.sync.write(actualImg));
        return;
    }

    // if we have multiple expected images, we'll compare against each one and pick the one with
    // the least amount of difference; this is useful for covering features that render differently
    // depending on platform, i.e. heatmaps use half-float textures for improved rendering where supported
    let minDiff = Infinity;
    let minDiffImg: PNG;
    let minExpectedBuf: Buffer;

    for (const path of expectedPaths) {
        const expectedBuf = fs.readFileSync(path);
        const expectedImg = PNG.sync.read(expectedBuf);
        const diffImg = new PNG({width, height});

        const diff = pixelmatch(
            actualImg.data, expectedImg.data, diffImg.data,
            width, height, {threshold: testData.threshold}) / (width * height);

        if (diff < minDiff) {
            minDiff = diff;
            minDiffImg = diffImg;
            minExpectedBuf = expectedBuf;
        }
    }

    const diffBuf = PNG.sync.write(minDiffImg, {filterType: 4});
    const actualBuf = PNG.sync.write(actualImg, {filterType: 4});

    fs.writeFileSync(diffPath, diffBuf);
    fs.writeFileSync(actualPath, actualBuf);

    testData.difference = minDiff;
    testData.ok = minDiff <= testData.allowed;

    testData.actual = actualBuf.toString('base64');
    testData.expected = minExpectedBuf.toString('base64');
    testData.diff = diffBuf.toString('base64');
}

/**
 * Mocks XHR request and simply pulls file from the file system.
 */
function mockXhr() {
    global.XMLHttpRequest = fakeXhr.useFakeXMLHttpRequest() as any;
    // @ts-ignore
    XMLHttpRequest.onCreate = (req: FakeXMLHttpRequest & XMLHttpRequest & { response: any }) => {
        setTimeout(() => {
            if (req.readyState === 0) return; // aborted...
            const relativePath = req.url.replace(/^http:\/\/localhost:(\d+)\//, '').replace(/\?.*/, '');

            let body: Buffer | null = null;
            try {
                if (relativePath.startsWith('mvt-fixtures')) {
                    body = fs.readFileSync(path.join(path.dirname(require.resolve('@mapbox/mvt-fixtures')), '..', relativePath));
                } else {
                    body = fs.readFileSync(path.join(__dirname, '../assets', relativePath));
                }
                if (req.responseType !== 'arraybuffer') {
                    req.response = body.toString('utf8');
                } else {
                    req.response = body;
                }
                req.setStatus(req.response.length > 0 ? 200 : 204);
                req.onload(undefined as any);
            } catch (ex) {
                req.status = 404; // file not found
                req.onload(undefined as any);
            }
        }, 0);
    };
}

/**
 * Gets all the tests from the file system looking for style.json files.
 *
 * @param options The options
 * @param directory The base directory
 * @returns The tests data structure and the styles that were loaded
 */
function getTestStyles(options: RenderOptions, directory: string, port: number): StyleWithTestData[] {
    const tests = options.tests || [];

    const sequence = globSync('**/style.json', {cwd: directory})
        .map(fixture => {
            const id = path.dirname(fixture);
            const style = JSON.parse(fs.readFileSync(path.join(directory, fixture), 'utf8')) as StyleWithTestData;
            style.metadata = style.metadata || {} as any;

            style.metadata.test = Object.assign({
                id,
                width: 512,
                height: 512,
                pixelRatio: 1,
                recycleMap: options.recycleMap || false,
                allowed: 0.00025,
                threshold: 0.1285,
            }, style.metadata.test);

            return style;
        })
        .filter(style => {
            const test = style.metadata.test;
            if (tests.length !== 0 && !tests.some(t => test.id.indexOf(t) !== -1)) {
                return false;
            }

            if (process.env.BUILDTYPE !== 'Debug' && test.id.match(/^debug\//)) {
                console.log(`* skipped ${test.id}`);
                return false;
            }
            localizeURLs(style, port, path.join(__dirname, '../'));
            return true;
        });
    return sequence;
}

/**
 * Replacing the browser method of get image in order to avoid usage of context and canvas 2d with Image object...
 * @param img - CanvasImageSource
 * @param padding - padding around the image
 * @returns ImageData
 */
/*
browser.getImageData = (img: CanvasImageSource, padding = 0): ImageData => {
    // HTMLImageElement/HTMLCanvasElement etc interface in lib.dom.d.ts does not expose data property
    // @ts-ignore
    const data = img.data;
    if (!data) {
        return {width: 1, height: 1, data: new Uint8ClampedArray(1), colorSpace: 'srgb'};
    }
    const width = img.width as number;
    const height = img.height as number;

    const source = new Uint8ClampedArray(data);
    const dest = new Uint8ClampedArray((2 * padding + width) * (2 * padding + height) * 4);

    const offset = (2 * padding + width) * padding + padding;
    for (let i = 0; i < height; i++) {
        dest.set(source.slice(i * width * 4, (i + 1) * width * 4), 4 * (offset + (width + 2 * padding) * i));
    }
    return {width: width + 2 * padding, height: height + 2 * padding, data: dest, colorSpace: 'srgb'};
};

/**
 * Replacing the browser method of getImageCanvasContext in order to avoid usage of context and canvas 2d with Image object...
 * @param img - CanvasImageSource
 * @returns Mocked CanvasRenderingContext2D object
 */
/*
browser.getImageCanvasContext = (img: CanvasImageSource) : CanvasRenderingContext2D => {
    // TS ignored as we are just mocking 1 of the 60+ CanvasRenderingContext2D properties/functions.
    // @ts-ignore
    return {
        getImageData: (x, y, width, height) => {
            const imgData = browser.getImageData(img);
            const source = new Uint8ClampedArray(imgData.data);
            const sourceWidth = imgData.width;
            const dest = new Uint8ClampedArray(width * height * 4);

            for (let i = 0; i < height; i++) {
                const offset = sourceWidth * (y + i) * 4 + x * 4;
                dest.set(source.slice(offset, offset + width * 4), 4 * width * i);
            }

            return {width, height, data: dest, colorSpace: 'srgb'};
        }
    };
};
*/
function createFakeCanvas(document: Document, id: string, imagePath: string): HTMLCanvasElement {
    const fakeCanvas = document.createElement('canvas');
    const image = PNG.sync.read(fs.readFileSync(path.join(__dirname, '../assets', imagePath)));
    fakeCanvas.id = id;
    (fakeCanvas as any).data = image.data;
    fakeCanvas.width = image.width;
    fakeCanvas.height = image.height;
    return fakeCanvas;
}

function updateFakeCanvas(document: Document, id: string, imagePath: string) {
    const fakeCanvas = document.getElementById(id);
    const image = PNG.sync.read(fs.readFileSync(path.join(__dirname, '../assets', imagePath)));
    (fakeCanvas as any).data = image.data;
}

let browser = await chromium.launch();

/**
 * It creates the map and applies the operations to create an image
 * and returns it as a Uint8Array
 *
 * @param style The style to use
 * @returns an image byte array promise
 */
async function getImageFromStyle(styleForTest: StyleWithTestData): Promise<Uint8Array> {
    const width = styleForTest.metadata.test.width;
    const height = styleForTest.metadata.test.height;

    let context = await browser.newContext({
        viewport: {width, height},
        deviceScaleFactor: 2,
    });

    let page = await context.newPage();
    await page.setContent(`
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Query Test Page</title>
    <meta charset='utf-8'>
    <link rel="icon" href="about:blank">
    <style>#map {
        box-sizing:content-box;
        width:${width}px;
        height:${height}px;
    }</style>
</head>
<body>
    <div id='map'></div>
</body>
</html>`);

    await page.addScriptTag({path: 'dist/maplibre-gl.js'});
    await page.addStyleTag({path: 'dist/maplibre-gl.css'});
    let evaluatedArray = await page.evaluate(async (style: StyleWithTestData) => {
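        // NOTE: this callback is serialized and executed inside the browser page, so the
        // node-only helpers referenced below (PNG/pngjs, fs, path, customLayerImplementations,
        // createFakeCanvas/updateFakeCanvas) are not actually available in that scope - see the
        // follow-up comments about the errored addImage/canvas/custom-layer tests.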
        const options = style.metadata.test;

        /**
        * Executes the operations in the test data
        *
        * @param testData The test data to operate upon
        * @param map The Map
        * @param operations The operations
        * @param callback The callback to use when all the operations are executed
        */
        function applyOperations(testData: TestData, map: Map & { _render: () => void}, operations: any[], callback: Function) {
            const operation = operations && operations[0];
            if (!operations || operations.length === 0) {
                callback();

            } else if (operation[0] === 'wait') {
                if (operation.length > 1) {
                    if (typeof operation[1] === 'number') {
                        //now += operation[1];
                        map._render();
                        applyOperations(testData, map, operations.slice(1), callback);
                    } else {
                        // Wait for the event to fire
                        map.once(operation[1], () => {
                            applyOperations(testData, map, operations.slice(1), callback);
                        });
                    }
                } else {
                    const wait = function() {
                        if (map.loaded()) {
                            applyOperations(testData, map, operations.slice(1), callback);
                        } else {
                            map.once('render', wait);
                        }
                    };
                    wait();
                }
            } else if (operation[0] === 'sleep') {
                // Prefer "wait", which renders until the map is loaded
                // Use "sleep" when you need to test something that sidesteps the "loaded" logic
                setTimeout(() => {
                    applyOperations(testData, map, operations.slice(1), callback);
                }, operation[1]);
            } else if (operation[0] === 'addImage') {
                const {data, width, height} = PNG.sync.read(fs.readFileSync(path.join(__dirname, '../assets', operation[2])));
                map.addImage(operation[1], {width, height, data: new Uint8Array(data)}, operation[3] || {});
                applyOperations(testData, map, operations.slice(1), callback);
            } else if (operation[0] === 'addCustomLayer') {
                map.addLayer(new customLayerImplementations[operation[1]](), operation[2]);
                map._render();
                applyOperations(testData, map, operations.slice(1), callback);
            } else if (operation[0] === 'updateFakeCanvas') {
                const canvasSource = map.getSource(operation[1]) as CanvasSource;
                canvasSource.play();
                // update before pause should be rendered
                updateFakeCanvas(window.document, testData.addFakeCanvas.id, operation[2]);
                canvasSource.pause();
                // update after pause should not be rendered
                updateFakeCanvas(window.document, testData.addFakeCanvas.id, operation[3]);
                map._render();
                applyOperations(testData, map, operations.slice(1), callback);
            } else if (operation[0] === 'setStyle') {
                // Disable local ideograph generation (enabled by default) for
                // consistent local ideograph rendering using fixtures in all runs of the test suite.
                map.setStyle(operation[1], {localIdeographFontFamily: false as any});
                applyOperations(testData, map, operations.slice(1), callback);
            } else if (operation[0] === 'pauseSource') {
                map.style.sourceCaches[operation[1]].pause();
                applyOperations(testData, map, operations.slice(1), callback);
            } else {
                if (typeof map[operation[0]] === 'function') {
                    map[operation[0]](...operation.slice(1));
                }
                applyOperations(testData, map, operations.slice(1), callback);
            }
        }

        return await new Promise(async (resolve, reject) => {
            setTimeout(() => {
                reject(new Error('Test timed out'));
            }, options.timeout || 20000);

            if (options.addFakeCanvas) {
                const fakeCanvas = createFakeCanvas(window.document, options.addFakeCanvas.id, options.addFakeCanvas.image);
                window.document.body.appendChild(fakeCanvas);
            }
            const map = new maplibregl.Map({
                container: 'map',
                style,

                // @ts-ignore
                classes: options.classes,
                interactive: false,
                attributionControl: false,
                maxPitch: options.maxPitch,
                pixelRatio: options.pixelRatio,
                preserveDrawingBuffer: true,
                axonometric: options.axonometric || false,
                skew: options.skew || [0, 0],
                fadeDuration: options.fadeDuration || 0,
                localIdeographFontFamily: options.localIdeographFontFamily || false as any,
                crossSourceCollisions: typeof options.crossSourceCollisions === 'undefined' ? true : options.crossSourceCollisions
            });

            // Configure the map to never stop the render loop
            map.repaint = typeof options.continuesRepaint === 'undefined' ? true : options.continuesRepaint;
            //now = 0;
            //browser.now = () => {
            //    return now;
            //};

            if (options.debug) map.showTileBoundaries = true;
            if (options.showOverdrawInspector) map.showOverdrawInspector = true;
            if (options.showPadding) map.showPadding = true;

            const gl = map.painter.context.gl;

            map.once('load', () => {
                if (options.collisionDebug) {
                    map.showCollisionBoxes = true;
                    if (options.operations) {
                        options.operations.push(['wait']);
                    } else {
                        options.operations = [['wait']];
                    }
                }

                applyOperations(options, map as any, options.operations, () => {
                    const viewport = gl.getParameter(gl.VIEWPORT);
                    const w = viewport[2];
                    const h = viewport[3];

                    const data = new Uint8Array(w * h * 4);
                    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, data);

                    // Flip the scanlines.
                    const stride = w * 4;
                    const tmp = new Uint8Array(stride);
                    for (let i = 0, j = h - 1; i < j; i++, j--) {
                        const start = i * stride;
                        const end = j * stride;
                        tmp.set(data.slice(start, start + stride), 0);
                        data.set(data.slice(end, end + stride), start);
                        data.set(tmp, end);
                    }

                    map.remove();
                    //gl.getExtension('STACKGL_destroy_context').destroy();
                    delete map.painter.context.gl;

                    if (options.addFakeCanvas) {
                        const fakeCanvas = window.document.getElementById(options.addFakeCanvas.id);
                        fakeCanvas.parentNode.removeChild(fakeCanvas);
                    }
                    debugger;
                    resolve(data);
                });
            });     
        });
    }, styleForTest as any);

    return new Uint8Array(Object.values(evaluatedArray as object) as number[]);
};

/**
 * Prints the progress to the console
 *
 * @param test The current test
 * @param total The total number of tests
 * @param index The current test index
 */
function printProgress(test: TestData, total: number, index: number) {
    if (test.error) {
        console.log('\x1b[31m', `${index}/${total}: errored ${test.id} ${test.error.message}`, '\x1b[0m');
    } else if (!test.ok) {
        console.log('\x1b[31m', `${index}/${total}: failed ${test.id} ${test.difference}`, '\x1b[0m');
    } else {
        console.log(`${index}/${total}: passed ${test.id}`);
    }
}

type TestStats = {
    total: number;
    errored: TestData[];
    failed: TestData[];
    passed: TestData[];
};

/**
 * Prints the summary at the end of the run
 *
 * @param stats all the tests with their results
 * @returns true when there are no failed or errored tests
 */
function printStatistics(stats: TestStats): boolean {
    const erroredCount = stats.errored.length;
    const failedCount = stats.failed.length;
    const passedCount = stats.passed.length;

    function printStat(status: string, statusCount: number) {
        if (statusCount > 0) {
            console.log(`${statusCount} ${status} (${(100 * statusCount / stats.total).toFixed(1)}%)`);
        }
    }

    printStat('passed', passedCount);
    printStat('failed', failedCount);
    printStat('errored', erroredCount);

    return (failedCount + erroredCount) === 0;
}

/**
 * Run the render test suite, compute differences to expected values (making exceptions based on
 * implementation vagaries), print results to standard output, write test artifacts to the
 * filesystem (optionally updating expected results), and exit the process with a success or
 * failure code.
 *
 * If all the tests are successful, this function exits the process with exit code 0. Otherwise
 * it exits with 1.
 */
const options: RenderOptions = {
    tests: [],
    recycleMap: false,
    report: false,
    seed: makeHash()
};

if (process.argv.length > 2) {
    options.tests = process.argv.slice(2).filter((value, index, self) => { return self.indexOf(value) === index; }) || [];
    options.recycleMap = checkParameter(options, '--recycle-map');
    options.report = checkParameter(options, '--report');
    options.seed = checkValueParameter(options, options.seed, '--seed');
}

//mockXhr();

const server = http.createServer(
    st({
        path: 'test/integration/assets',
        cors: true,
    })
);
await new Promise<void>((resolve) => server.listen(resolve));

const directory = path.join(__dirname);
const testStyles = getTestStyles(options, directory, (server.address() as any).port);
let index = 0;
for (const style of testStyles) {
    try {
        //@ts-ignore
        const data = await getImageFromStyle(style);
        compareRenderResults(directory, style.metadata.test, data);
    } catch (ex) {
        style.metadata.test.error = ex;
    }
    printProgress(style.metadata.test, testStyles.length, ++index);
}

const tests = testStyles.map(s => s.metadata.test).filter(t => !!t);
const testStats: TestStats = {
    total: tests.length,
    errored: tests.filter(t => t.error),
    failed: tests.filter(t => !t.error && !t.ok),
    passed: tests.filter(t => !t.error && t.ok)
};

if (process.env.UPDATE) {
    console.log(`Updated ${testStyles.length} tests.`);
    process.exit(0);
}

const success = printStatistics(testStats);

function getReportItem(test: TestData) {
    let status: 'errored' | 'failed';

    if (test.error) {
        status = 'errored';
    } else {
        status = 'failed';
    }

    return `<div class="test">
    <h2>${test.id}</h2>
    ${status !== 'errored' ? `
        <img width="${test.width}" height="${test.height}" src="data:image/png;base64,${test.actual}" data-alt-src="data:image/png;base64,${test.expected}">
        <img style="width: ${test.width}; height: ${test.height}" src="data:image/png;base64,${test.diff}">` : ''
}
    ${test.error ? `<p style="color: red"><strong>Error:</strong> ${test.error.message}</p>` : ''}
    ${test.difference ? `<p class="diff"><strong>Diff:</strong> ${test.difference}</p>` : ''}
</div>`;
}

if (options.report) {
    const erroredItems = testStats.errored.map(t => getReportItem(t));
    const failedItems = testStats.failed.map(t => getReportItem(t));

    // write HTML reports
    let resultData: string;
    if (erroredItems.length || failedItems.length) {
        const resultItemTemplate = fs.readFileSync(path.join(__dirname, 'result_item_template.html')).toString();
        resultData = resultItemTemplate
            .replace('${failedItemsLength}', failedItems.length.toString())
            .replace('${failedItems}', failedItems.join('\n'))
            .replace('${erroredItemsLength}', erroredItems.length.toString())
            .replace('${erroredItems}', erroredItems.join('\n'));
    } else {
        resultData = '<h1 style="color: green">All tests passed!</h1>';
    }

    const reportTemplate = fs.readFileSync(path.join(__dirname, 'report_template.html')).toString();
    const resultsContent = reportTemplate.replace('${resultData}', resultData);

    const p = path.join(__dirname, options.recycleMap ? 'results-recycle-map.html' : 'results.html');
    fs.writeFileSync(p, resultsContent, 'utf8');
    console.log(`\nFull html report is logged to '${p}'`);

    // write text report of just the error/failed id
    if (testStats.errored?.length > 0) {
        const erroredItemIds = testStats.errored.map(t => t.id);
        const caseIdFileName = path.join(__dirname, 'results-errored-caseIds.txt');
        fs.writeFileSync(caseIdFileName, erroredItemIds.join('\n'), 'utf8');

        console.log(`\n${testStats.errored?.length} errored test case IDs are logged to '${caseIdFileName}'`);
    }

    if (testStats.failed?.length > 0) {
        const failedItemIds = testStats.failed.map(t => t.id);
        const caseIdFileName = path.join(__dirname, 'results-failed-caseIds.txt');
        fs.writeFileSync(caseIdFileName, failedItemIds.join('\n'), 'utf8');

        console.log(`\n${testStats.failed?.length} failed test case IDs are logged to '${caseIdFileName}'`);
    }
}

process.exit(success ? 0 : 1);
birkskyum commented 1 year ago

Thanks! It's a great start. The "gl" package is also in use in many unit tests, so there's that as well, but this will probably be the hardest nut to crack.

birkskyum commented 1 year ago

I've done some debugging, and removed most of this code to figure out what actually takes too long.

For each test, getImageFromStyle() is run. Inside it, a template is set on the puppeteer page with a <div id="map">, which takes very little time. Then page.evaluate() runs, which contains this:

const map = new maplibregl.Map({
    container: 'map',
    style,
});

map.once('load', () => {
    // <- Some other logic here ->
    resolve(true);
});

Merely initializing the map inside puppeteer with the style, and doing nothing else, takes too long before reaching map.once('load').

@IvanSanchez, any ideas?

HarelM commented 1 year ago

Not waiting for the map to load would basically mean that the right image is not presented in the canvas - yes, that is a lot faster. 😀 The map load is slow since there's a lot going on there; this is basically the time it takes for initial load, which we know we have a lot to improve, but I'm not sure there's an easy way to cut this time. I also tried parallelizing the tests using tabs or different browser instances, and it didn't improve things by much, unfortunately... I also tried creating the map once and only changing the style between tests, but it created weird behavior; same goes for reusing the same page. I hope you'll find better solutions though 😀 or take my ideas to a state where they can be used.

birkskyum commented 1 year ago

Why would it take the map longer to load inside puppeteer than outside in node? That's what I can't wrap my head around

HarelM commented 1 year ago

I think DOM manipulation is slow, and it is entirely mocked when running in node, but this is just an educated guess...

birkskyum commented 1 year ago

I tried setting a test timeout of 3 sec and ran the test suite. It took 7m 6.3s, and this was my result:

1161 passed (93.9%) 47 failed (3.8%) 28 errored (2.3%)

The 28 errored are caused by the 3 sec timeout being hit. The tests typically time out when they request files that cannot be reached from within puppeteer, or more specifically because "PNG" (the pngjs package) isn't available in the browser, as it's made for node.

The 47 failed are likely exposing some bugs / inaccuracies that have stayed hidden until now due to differences between the mocks and a modern Chrome runtime, which is more likely to represent what our users will experience.
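One direction for working around the pngjs limitation (a sketch of the idea, not necessarily what I'll end up doing): decode the PNG in node and pass plain width/height/pixel data across the evaluate boundary, so the page never needs pngjs.

import fs from 'fs';
import path from 'path';
import {PNG} from 'pngjs';

// Hypothetical helper: reads a test asset in node and returns a structure that can be
// serialized into page.evaluate() and fed to map.addImage() in the browser.
function readImageForBrowser(assetPath: string) {
    const {width, height, data} = PNG.sync.read(
        fs.readFileSync(path.join('test/integration/assets', assetPath)));
    return {width, height, data: Array.from(data)};
}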

HarelM commented 1 year ago

Merge my small fix https://github.com/maplibre/maplibre-gl-js/pull/2457 and you'll easily be able to see, using --report, the difference between the images.

birkskyum commented 1 year ago

Here is my report

results.html.zip

birkskyum commented 1 year ago

I see a lot of wrong fonts in the lists of failed tests. Maybe all this node-canvas with its registerFont etc. shouldn't be used the way it is, including the rtlTextPlugin(), which maybe should also be inside puppeteer.

HarelM commented 1 year ago

Fonts are also different between OSs to some extent. I'm not in front of my PC to take a deeper look into the report, but if the differences are subtle you can update the expected images.

HarelM commented 1 year ago

Also, puppeteer is another dependency which we should avoid if we are using playwright. But for a proof of concept it doesn't matter.

birkskyum commented 1 year ago

Fixed the RTL tests - another 7 test cases pass, so now the score is:

1168 passed (94.5%) 40 failed (3.2%) 28 errored (2.3%)

results.html.zip

birkskyum commented 1 year ago

Fixed the tests using customLayerImplementation - a further 4 of the errored tests resolved:

1172 passed (94.8%) 40 failed (3.2%) 24 errored (1.9%)

results.html.zip

birkskyum commented 1 year ago

Fixed the 3 canvas tests:

1175 passed (95.1%) 40 failed (3.2%) 21 errored (1.7%)

results.html.zip

birkskyum commented 1 year ago

Resolved addImage for many cases

1195 passed (96.7%) 39 failed (3.2%) 2 errored (0.2%)

results.html.zip

birkskyum commented 1 year ago

Fixed the real-world mvt tests, so we're at:

1201 passed (97.2%) 33 failed (2.7%) 2 errored (0.2%)

results.html.zip

birkskyum commented 1 year ago

Fix basic-v9 and runtime-styling tests

1210 passed (97.9%) 24 failed (1.9%) 2 errored (0.2%)

results.html.zip

birkskyum commented 1 year ago

Fix text-variable-anchor and text-offset and more

1219 passed (98.6%) 15 failed (1.2%) 2 errored (0.2%)

results.html.zip

birkskyum commented 1 year ago

Found an animation bug in applyOperations - fixed most of the easeTo tests

1226 passed (99.2%) 8 failed (0.6%) 2 errored (0.2%)

results.html.zip

At this point, most of the failed tests are just related to me having macOS fonts.

birkskyum commented 1 year ago

The test '/text-writing-mode/point_label/cjk-arabic-vertical-mode' is quite tough because it needs a CJK + Arabic font, and I can't find one.

birkskyum commented 1 year ago

1228 passed (99.4%) 8 failed (0.6%)

results.html.zip

The failing tests are now all minute differences or due to OS-dependent font variations. My test runtime is 7:30 min (the existing suite takes 1:03 on my machine) - it might be worth it, considering it would allow testing of #1891 (and WebGPU when it's relevant), and there might be ways to make it slightly faster.

HarelM commented 1 year ago

Can you open a PR so we can see how much time it takes here?

birkskyum commented 1 year ago

I only set out to do an assessment of feasibility, and my conclusion is that this is the most viable approach to gain webgl2/webgpu test support, clean up the mocks, and pave the way for some significant future performance gains.

Is this task time sensitive, given that the webgl2 PR that is supposed to go into 3.x depends on it?

birkskyum commented 1 year ago

Some of the unit tests might be better suited as e2e tests. For instance the full screen control and more - basically all the tests that require the whole map to be mocked.

HarelM commented 1 year ago

Yes and no, I think unit tests are important, and we should have both in a lot of cases - i.e. have the unit test to make sure we cover the relevant code path, and have e2e to make sure the user experience is as we want it to be.

birkskyum commented 1 year ago

The issue is that the JSDOM + node-canvas + headless-gl setup can import functions directly from a file, and is thus good for unit tests since it makes it possible to calculate coverage, but these mocks will most likely not be able to support anything but WebGL 1 for years, so we're a bit up against the wall here. Just for clarification, what I call e2e tests are basically the exact same tests, just run inside a browser using the production bundle - the same setup as the integration tests.

birkskyum commented 1 year ago

I tried to run it in playwright, and it took my machine 28 min, compared to the 7-8 min in puppeteer, using all the same flags, so that is quite a surprising and significant difference.

HarelM commented 1 year ago

Hmmm... Didn't expect that... Probably worth opening an issue with playwright, I don't think it's reasonable... Also, it would be interesting to see how much time it takes in Cypress, but it's probably a lot of effort to bootstrap it...

birkskyum commented 1 year ago

Having e2e tests for both webgl1/webgl2 using an actual browser, and for now unit tests only for webgl1 with the mocks, is a possible way to get unblocked. It's a lot better than having no webgl2 tests at all when webgl2 support is being added to the repo and most users will end up using it.

birkskyum commented 1 year ago

Idk about playwright. The only difference in hardware support (chrome://gpu) is that "canvas out-of-process rasterization" is enabled in puppeteer and not in playwright. There are also people reporting significant performance regressions with the "new" headless mode since Chrome 112 in playwright, but when I run it in puppeteer the time spent is the same ... here is a playwright performance issue

Playwright: playwright_gpu_stats (chrome://gpu screenshot)

Puppeteer: puppeteer_gpu_stats (chrome://gpu screenshot)

birkskyum commented 1 year ago

If the addition of another dependency (puppeteer) is a concern, it's not that much work to replace playwright everywhere, so there is still only one headless browser. We only test with chromium anyway.

birkskyum commented 1 year ago

Got a run at 5m 7s using Chrome instead of Chromium with Puppeteer, thus getting even closer to our users' experience. Playwright has some catching up to do, as it doesn't even allow for Chrome testing at this point (only chromium), so puppeteer seems to be ahead of playwright in multiple ways currently.
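For reference, a hedged sketch of how that can be done (the exact launch option names depend on the Puppeteer version in use):

import puppeteer from 'puppeteer';

// Use a locally installed stable Chrome instead of the bundled Chromium.
const browser = await puppeteer.launch({
    channel: 'chrome',
    // or point at the binary directly, e.g.:
    // executablePath: '/usr/bin/google-chrome-stable',
});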

HarelM commented 1 year ago

Yes, I got confused between protractor and puppeteer. Protractor was deprecated, but not puppeteer. I think all tests should use the same framework, as another dependency is not a great idea, as you mentioned.

birkskyum commented 1 year ago

Deck.gl opted to only generate coverage using webgl1, due to similar restrictions, and added puppeteer tests without coverage where possible.

birkskyum commented 1 year ago

I made some extrapolations that show it could take approx. 13 min to run test-render with puppeteer on a standard GitHub ubuntu runner. It takes just shy of 3 min with the current mocks. Windows might take longer, as it's always very slow compared to ubuntu.

I can't see how it can become much faster, so it should be possible to gauge whether this is worth investing in, or whether it's a deal-breaker at this point. What do you think? In comparison to the native workflows that always run for hours on AWS device farms, this is still fast, but compared to the current super-fast test suites, it's definitely slower.

HarelM commented 1 year ago

The current tests run ~13m on Ubuntu and ~18m on Windows. It's not great, but it's not really long compared to everything else in the industry and in general. So at this point, you can easily split out the render tests to a different yml file and keep the current total runtime almost the same. In theory, you can probably split the render tests into roughly two groups that run in parallel on different machines and cut this in half, I guess. Since in most cases you won't need to run the entire suite locally, and the recent upload of the report allows easy fixing of the render tests, I think this will be a good path forward. Removing all this mocking is probably a good idea; test run time should be a second priority. Bottom line, you have a green light. If you think this falls under a bounty direction, please suggest a bounty.
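A sketch of how that split could look in the runner script itself (hypothetical SHARD_INDEX/SHARD_TOTAL environment variables, not an existing option of the script above):

// Each CI machine sets its own SHARD_INDEX plus the shared SHARD_TOTAL, and only runs its slice.
const shardIndex = Number(process.env.SHARD_INDEX ?? 0);
const shardTotal = Number(process.env.SHARD_TOTAL ?? 1);
const shardedStyles = testStyles.filter((_, i) => i % shardTotal === shardIndex);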

birkskyum commented 1 year ago

Very well. I'd say this task matches the "Rendering performance improvements, measurement, test and best-practice documentation" item of the Performance direction well.

I propose a 2k bounty to lift the quality of the render tests from integration tests to e2e tests, and to allow them to handle webgl2/webgpu by:

If there is a way to reach the render test report generated in the GitHub workflow, that would help a lot with debugging all the failing tests there - it looks like they are being uploaded somewhere.

HarelM commented 1 year ago

Can you create another issue and/or update this one in order to have two large bounty issues? 2k is not a single bounty size, I believe...

In order to reach the uploaded report you need to go to Actions, find the relevant build, and download the assets from there - completely unintuitive... But that's GitHub, not me...