DevExpress / testcafe

A Node.js tool to automate end-to-end web testing.
https://testcafe.io
MIT License

Visual regression testing #1207

Closed: kirovboris closed this issue 1 year ago

kirovboris commented 7 years ago

@DevExpress/testcafe Let's discuss the functionality of layout testing.

I suggest the following approach. It may look over-engineered, but in my view it is quite flexible.

Screenshot provider

I suggest we make it possible to use any screenshot comparison library. We provide two options out of the box: per-pixel comparison and perceptual hash comparison.

To avoid working with huge binary images, we can calculate a perceptual hash. In this case we don't create a screenshot file and compare only the hash values. To build the diff images, the user should run the local tests in per-pixel mode.
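For illustration, here is a minimal sketch of what hash-based comparison could look like: it computes an 8x8 average hash and compares hashes by Hamming distance. The jimp library, the function names, and the 5-bit threshold are assumptions for the sketch, not a committed design.

const Jimp = require('jimp');

// Compute a 64-bit average hash: shrink to 8x8 pixels, convert to
// greyscale, then mark each pixel as above or below the mean brightness.
async function averageHash (imagePath) {
    const img    = (await Jimp.read(imagePath)).resize(8, 8).greyscale();
    const pixels = [];

    // In a greyscale image the red channel holds the grey value.
    img.scan(0, 0, 8, 8, function (x, y, idx) {
        pixels.push(this.bitmap.data[idx]);
    });

    const mean = pixels.reduce((sum, p) => sum + p, 0) / pixels.length;

    return pixels.map(p => (p > mean ? '1' : '0')).join('');
}

// Number of bit positions in which two hashes differ.
function hammingDistance (hashA, hashB) {
    let distance = 0;

    for (let i = 0; i < hashA.length; i++) {
        if (hashA[i] !== hashB[i])
            distance++;
    }

    return distance;
}

// Images "match" if their hashes differ in at most `threshold` bits.
async function imagesMatch (pathA, pathB, threshold = 5) {
    const [hashA, hashB] = await Promise.all([averageHash(pathA), averageHash(pathB)]);

    return hammingDistance(hashA, hashB) <= threshold;
}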

We can also provide the ability to take screenshots in black-and-white or high-contrast mode. I think these settings can be passed to the chosen library as layoutTestOptions.

So we create two providers (a sketch of the provider contract follows the list):

  1. Pixel-perfect provider with options:

    {
        mode: 'color' | 'black_and_white',
        threshold: Number
    }

  2. Perceptual hash provider with options:

    {
        threshold: Number
        // or
        exactMatching: true | false
        // depends on the implementation
    }
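To make the provider contract concrete, here is a minimal sketch of what the pixel-perfect provider could look like. The contract shape (a compare function receiving two PNG buffers and layoutTestOptions) is an assumption, pngjs is used for decoding, and the black_and_white mode is omitted for brevity:

const { PNG } = require('pngjs');

// Hypothetical provider contract: TestCafe would pass two PNG buffers
// plus the user's layoutTestOptions and get back a match verdict.
const pixelPerfectProvider = {
    name: 'pixel-perfect',

    compare (actualBuffer, etalonBuffer, options = {}) {
        const { threshold = 0 } = options;
        const actual = PNG.sync.read(actualBuffer);
        const etalon = PNG.sync.read(etalonBuffer);

        if (actual.width !== etalon.width || actual.height !== etalon.height)
            return { match: false, mismatchedPixels: Infinity };

        let mismatchedPixels = 0;

        // RGBA layout: 4 bytes per pixel; compare the RGB channels.
        for (let i = 0; i < actual.data.length; i += 4) {
            if (actual.data[i]     !== etalon.data[i] ||
                actual.data[i + 1] !== etalon.data[i + 1] ||
                actual.data[i + 2] !== etalon.data[i + 2])
                mismatchedPixels++;
        }

        return { match: mismatchedPixels <= threshold, mismatchedPixels };
    }
};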

Run options

For the run function, we add options for screenshot comparison:

{
    etalonsPath: <relative_or_absolute_path>,
    layoutTestProvider: <provider_name>,
    layoutTestOptions: {...},
    updateEtalons: true | false
}

Accordingly, we add similar options to the CLI: --etalons-path, --layout-provider, --update-etalons.
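With these options, a programmatic run could look like the sketch below. createTestCafe and the runner methods are TestCafe's existing API; the four layout-related keys are only proposed in this issue and do not exist in the shipped API:

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost', 1337, 1338);

    const failedCount = await testcafe
        .createRunner()
        .src('tests/layout.js')
        .browsers('chrome')
        .run({
            // Proposed in this issue -- not part of the shipped API.
            etalonsPath:        './etalons',
            layoutTestProvider: 'pixel-perfect',
            layoutTestOptions:  { mode: 'color', threshold: 10 },
            updateEtalons:      false
        });

    console.log('Failed tests:', failedCount);
    await testcafe.close();
})();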

Test API

We provide two different ways to test layout.

  1. We look for an etalon automatically; the user just calls the t.checkLayout() method.

checkLayout() searches for the test's screenshot according to the order of its calls in the test body (the first call maps to 1.png, the second to 2.png, and so on).

test('testName', async t => {
    await t
        .click('#el')
        .checkLayout() //looking for .../<testName>/<workerName>/1.png        
        .type('.inputClass', 'value')
        .checkLayout() //looking for .../<testName>/<workerName>/2.png
        ......
        .click(...)
        .checkLayout() //looking for .../<testName>/<workerName>/<N>.png
});
  2. We provide a testController.Image constructor and an .equalLayout() assertion. This means the user decides on their own how to store artifacts and etalons; we just use the comparison logic from the provider.

E.g.:

test('testName', async t => {
    await t.expect(new t.Image(<image_path>)).equalLayout();
});

We should also resize the browser window to match the screenshot size.
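For the resizing step, the existing t.resizeWindow action could be driven by the etalon's dimensions. A sketch, assuming the image-size package and an illustrative etalon path:

import sizeOf from 'image-size';

fixture('Layout tests')
    .page('https://example.com'); // placeholder page

test('resize window to the etalon size', async t => {
    // Illustrative etalon path; the real naming scheme is described above.
    const { width, height } = sizeOf('etalons/testName/1.png');

    await t
        .resizeWindow(width, height)        // existing TestCafe action
        .takeScreenshot('testName/1.png');  // existing TestCafe action
});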

Screenshots storage

Every provider implements its own mechanism for storing artifacts. For the per-pixel provider, we store screenshots in the same way as the current screenshots directory. We should also create difference files in that directory.

Etalons are read from the path specified in etalonsPath (programmatic API) or in the --etalons-path parameter (CLI). For hash comparison, we write key-value pairs to a .json file.

{
    '<testName1><workerName1><etalonId1>': '3c3e0e1a3a1e1e2e',
    ...
    '<testNameN><workerNameN><etalonIdN>': 'ac3e0e1a3a1e1e2F'
}
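A minimal sketch of how such a hash store could be read and updated; the file name, key scheme, and exact-match logic are assumptions based on the example above:

const fs = require('fs');

const HASH_STORE_PATH = 'etalons/hashes.json'; // assumed file name

function loadHashStore () {
    return fs.existsSync(HASH_STORE_PATH)
        ? JSON.parse(fs.readFileSync(HASH_STORE_PATH, 'utf8'))
        : {};
}

// Returns true if the stored hash matches; records the hash on the first run.
function checkOrRecordHash (testName, workerName, etalonId, actualHash) {
    const store = loadHashStore();
    const key   = `${testName}${workerName}${etalonId}`;

    if (!(key in store)) {
        store[key] = actualHash;
        fs.writeFileSync(HASH_STORE_PATH, JSON.stringify(store, null, 4));
        return true;
    }

    return store[key] === actualHash;
}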

Updating etalons

As soon as a screenshot differs from its etalon, we run a web application with a GUI for managing etalons. This service runs only if --update-etalons was passed to the CLI or updateEtalons to the run options in the programmatic API. In this case, we output only the path to the difference file in the report.

As an alternative, we could simply output the paths to the artifacts, etalons, and difference files, but that does not look convenient.

inikulin commented 7 years ago

I see the following ways to simplify the API: 1) automatically generate etalons on the first run; 2) it would be nice to have comparison tools working out of the box. Reading this proposal, I have no idea which one to choose.

Overall, can we reduce API to single method?

VasilyStrelyaev commented 7 years ago

Let's not use the word etalon for base or baseline images. There is no such word as etalon; it's even underlined by spell checkers.

mjhea0 commented 7 years ago

@VasilyStrelyaev is there a development branch for this?

kirovboris commented 7 years ago

After discussion, we decided to simplify layout testing in the following way:

We look for an etalon automatically; the user just calls the t.checkLayout() method. t.checkLayout() searches for the test's screenshot according to the order of its calls in the test body (the first call maps to 1.png, the second to 2.png, and so on).

test('testName', async t => {
    await t
        .click('#el')
        .checkLayout() //looking for .../<testName>/<workerName>/1.png        
        .type('.inputClass', 'value')
        .checkLayout() //looking for .../<testName>/<workerName>/2.png
        ......
        .click(...)
        .checkLayout() //looking for .../<testName>/<workerName>/<N>.png
});

If an etalon doesn't exist, it will be created.

For a test run, you can specify the path to screenshots with --screenshots and the path to etalons with --etalons. The path to etalons can be relative or absolute.
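For example (--screenshots exists in TestCafe today, while --etalons is only the flag proposed here):

testcafe chrome tests/ --screenshots ./screenshots --etalons ./etalons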

In the screenshots directory, we generate a file with meta information about the screenshot comparison. If a test fails, we output the path to this file in the report.

Afterwards, you can run testcafe etalon-manager <path-to-diff-file>. It launches an application for managing screenshots and etalons and shows the screenshot differences for failed tests.
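The exact shape of the meta file is not specified yet; a hypothetical example, with every field name and path being illustrative only:

{
    "testName":   "testName",
    "workerName": "chrome",
    "etalonPath": "etalons/testName/chrome/1.png",
    "actualPath": "screenshots/testName/chrome/1.png",
    "diffPath":   "screenshots/testName/chrome/1-diff.png"
}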

AlexanderMoskovkin commented 7 years ago

is there a development branch for this?

@mjhea0 We haven't started working on this feature yet. We usually work on a feature in our own fork and open a pull request from there; we don't usually create new branches in the upstream repository.

mjhea0 commented 7 years ago

Thoughts on getting this into the next release? Or at the very least https://github.com/DevExpress/testcafe/issues/1357?

AlexanderMoskovkin commented 7 years ago

We plan to fix #1357 in this release iteration. We will be able to provide you with an alpha version as soon as the fix is ready.

mjhea0 commented 7 years ago

Rough example of how I'm doing visual regression tests: https://github.com/mjhea0/testcafe-visual-regression

AlexanderMoskovkin commented 7 years ago

@mjhea0 Thanks for the link, it looks interesting

ttiurani commented 7 years ago

Any news on this since May? I'm evaluating TestCafe for a new project, and layout testing is a crucial deciding factor between TestCafe and a Selenium-based stack. I'm guessing I'm not alone.

AlexanderMoskovkin commented 7 years ago

Hi @ttiurani,

We haven't implemented this functionality yet, but we keep this feature in mind and have already made some spike solutions. We are currently planning our next releases, so I can't provide you with estimates for this feature so far. Stay with us and we'll notify you about our progress.

sunainajain commented 6 years ago

waiting on this .....

mdstaff commented 6 years ago

Hi all, I'm a bit curious whether a decision was made between the "pixel perfect" and "perceptual hash" methods for implementing this feature. I've been looking at this interesting repository from Amex, and it looks promising: https://github.com/americanexpress/jest-image-snapshot

ttiurani commented 6 years ago

Just a note on this: we actually ended up doing image testing against our Storybook rather than with TestCafe, and we are very happy with that decision. We're using Storyshots image snapshots:

https://github.com/storybookjs/storybook/tree/next/addons/storyshots/storyshots-puppeteer

I would argue that in many cases, visually testing the Storybook stories is the right thing to do, with TestCafe used only for integration testing.

kirovboris commented 6 years ago

@mdstaff, @ttiurani Thanks for the idea, we'll check this out.

cbou commented 6 years ago

Any news on this issue? I would like to use TestCafe, but I need visual regression testing...

vladnauto commented 5 years ago

We also want to implement visual regression in our project, and it would be great to handle dynamic content similarly to Applitools (https://applitools.com/tutorials/selenium-javascript.html#part-5-use-advanced-ai-tools-to-work-with-real-world-scenarios), where you can ignore a region, or to WebdriverIO's visual regression (https://webdriver.io/blog/2019/05/18/visual-regression-for-v5.html), where you can provide a list of selectors to hide before the test. I assume the latter can be done easily by setting display: none or visibility: hidden on each of those elements.
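For reference, the selector-hiding idea can already be sketched with TestCafe's existing ClientFunction; the page and selectors below are placeholders:

import { ClientFunction } from 'testcafe';

fixture('Stable layout')
    .page('https://example.com'); // placeholder page

// Hide dynamic regions so they cannot cause spurious diffs.
const hideElements = ClientFunction(selectors => {
    selectors.forEach(selector => {
        document.querySelectorAll(selector).forEach(el => {
            el.style.visibility = 'hidden';
        });
    });
});

test('layout without dynamic content', async t => {
    await hideElements(['.ad-banner', '.live-timestamp']); // placeholder selectors
    await t.takeScreenshot('stable-layout.png');
});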

helen-dikareva commented 5 years ago

@vladnauto Thank you for your suggestions and the shared resources. I think the ability to ignore certain regions or elements during layout testing would be useful, e.g. when they change constantly. We'll consider implementing this functionality.

loggen commented 5 years ago

Any updates?

miherlosev commented 5 years ago

Hi @loggen

This feature has a high priority for us. At present, we are preparing a spike for it. We will post any news in this issue; track it to be notified of our progress.

sharathgrao commented 4 years ago

Following this. Let us know if this feature is on track and whether there is a possible ETA.

btkostner commented 4 years ago

For what it's worth, I was able to get something working with the native takeElementScreenshot action and the pixelmatch package. Most of the code is taken from the jest-image-snapshot package, but if you would like to take a look, it's available in one of my repositories. It is currently running in CI on Linux and Windows (not macOS, due to a different issue).
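For anyone curious, the core of that approach looks roughly like the sketch below, combining the takeElementScreenshot action with pixelmatch and pngjs; all paths, the selector, and the threshold are illustrative:

import fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

fixture('Visual regression')
    .page('https://example.com'); // placeholder page

test('header matches the baseline', async t => {
    // Saved relative to the configured screenshots directory
    // (assumed to be ./screenshots below).
    await t.takeElementScreenshot('#header', 'actual/header.png');

    // pixelmatch expects both images to have the same dimensions.
    const actual   = PNG.sync.read(fs.readFileSync('screenshots/actual/header.png'));
    const baseline = PNG.sync.read(fs.readFileSync('baselines/header.png'));
    const diff     = new PNG({ width: baseline.width, height: baseline.height });

    const mismatched = pixelmatch(
        actual.data, baseline.data, diff.data,
        baseline.width, baseline.height,
        { threshold: 0.1 }
    );

    fs.mkdirSync('screenshots/diff', { recursive: true });
    fs.writeFileSync('screenshots/diff/header.png', PNG.sync.write(diff));

    await t.expect(mismatched).eql(0, `${mismatched} pixels differ from the baseline`);
});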

If this looks promising and similar to what we want, I would be happy to start work on a branch.

Dmitry-Ostashev commented 4 years ago

@btkostner We are happy to hear that you found your own way of testing visual regression. Your approach doesn't seem to require changes in the TestCafe repository. I suppose you could publish your solution as a separate package.

benmonro commented 4 years ago

@miherlosev Any update on this? It was high priority; what happened?

AndreyBelym commented 4 years ago

Our attempts to create a prototype revealed that this feature requires far more resources and time than we initially thought. Since third-party visual testing solutions exist, we decided to shift our focus to other big features, such as multi-window testing. We will consider this feature for the next roadmap.

dmitry-kurmanov commented 2 years ago

Third-party solutions, JFYI: https://www.npmjs.com/package/devextreme-screenshot-comparer and https://github.com/tacoss/testcafe-blink-diff

AdrienLemaire commented 2 years ago

@dmitrykurmanov Can you confirm that devextreme-screenshot-comparer is a tool made by DevExpress? I cannot find the repo in https://github.com/orgs/DevExpress/repositories, Google returns nothing other than the npm package, and the package's readme/package.json do not mention a source repo. I'm not sure where to go if I have issues or questions regarding this package.

alexfi1in commented 2 years ago

The repo is private and is intended for internal use only. No support is provided for it.

helen-dikareva commented 2 years ago

A related request about assessing coverage for visual testing: https://stackoverflow.com/questions/72867706/visual-testing-how-can-i-assess-the-code-coverage

miherlosev commented 1 year ago

Hi folks,

At present, we support NativeAutomation mode only for Chromium-based browsers; we are going to support other browsers later. That said, we do not expect to implement this feature in the near future. You can use implementations shared by the community: testcafe-blink-diff (suitable for local comparison) or @percy/testcafe (integration with the https://percy.io/ image comparison infrastructure). If these are not suitable, you can create your own image comparison approach with the takeScreenshot action and one of the image comparison libraries: resemble, pixelmatch, blink-diff, etc. You can also do it yourself and create a PR; we will be happy to review it.
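For completeness, the @percy/testcafe integration mentioned above is used roughly like this (a PERCY_TOKEN must be set and tests are run through percy exec; see the Percy docs for details):

import percySnapshot from '@percy/testcafe';

fixture('Visual regression')
    .page('https://example.com'); // placeholder page

test('homepage looks right', async t => {
    // Sends a DOM snapshot to the Percy service for comparison.
    await percySnapshot(t, 'Homepage');
});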