Open bezyakina opened 2 years ago
@bezyakina so you have a test like this:

```js
test('should work', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page.locator('h1')).toContainText('title'); // 1
  await expect(page).toHaveScreenshot(); // 2
});
```
This test sometimes fails on `// 1` and sometimes fails on `// 2`:

- when it fails on `// 1`, you'd like to re-try it;
- when it fails on `// 2`, you'd like to avoid retries.

A few questions: have you looked at the `maxDiffPixels` or `mask` options? Would they help stabilize the screenshot comparison?
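In case it helps, a minimal sketch of what those options look like (the screenshot name and the masked locator are just placeholders):

```ts
// Tolerate a small amount of pixel noise and mask a dynamic region before comparing.
await expect(page).toHaveScreenshot('landing.png', {
  maxDiffPixels: 100,                      // up to 100 differing pixels are allowed
  mask: [page.locator('.advertisement')],  // this element is boxed out in both images
});
```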
@aslushnikov The problem is that the tests run in a webview on Android and use `androidWebView.page()`. Also, on our mobile farm there are various devices on which we run these tests. Sometimes during the test (rarely, but it happens) it may fail with an error before reaching the screenshot check:
```
Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
```
Test example:
```ts
for (const testCase of testCases) {
  test.describe('The layout is equal to snapshot', () => {
    let device: AndroidDevice;
    let app: AndroidApp;
    let page: SearchWebAppPage;

    test.beforeEach(async () => {
      [device] = await android.devices();
      app = new AndroidApp(device);
      page = new SearchWebAppPage(device);
      await page.prepareBeforeLayoutDiffTest(testCase);
    });

    test.afterEach(async () => {
      await app.close();
    });

    test(`${testCase.testName}`, async ({}, testInfo) => {
      const actualSnapshot = await page.getFullPageSnapshot(
        testCase,
        `${testCase.testSlug}-test.png`,
        testInfo
      );
      expect.soft(
        actualSnapshot,
        'The layout of the tested build is equal to the layout of the prod build'
      ).toMatchSnapshot(expectedSnapshotPath);
    });
  });
}
```
@bezyakina why don't you want to retry tests in case of screenshot differences? Is that to save some time?
@aslushnikov Because we take long screenshots: out of the box, full-page screenshots don't work correctly in mobile webviews, so we have to scroll, take a screenshot, repeat, and at the end combine all the screenshots into one big screenshot. This operation can take up to a minute. Since the tests are stable, if the layout is broken there is no point in retrying the test, because we will get the same result.
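Roughly, the scroll-and-stitch loop looks like this (a simplified sketch, not our exact code; the helper name, settle delay, and final stitching step are assumptions):

```ts
import type { Page } from '@playwright/test';

// Collect viewport-sized screenshots while scrolling down the page.
// The buffers are later stitched into one tall image with an image library.
async function collectFullPageChunks(page: Page): Promise<Buffer[]> {
  const chunks: Buffer[] = [];
  const viewportHeight = await page.evaluate(() => window.innerHeight);
  const totalHeight = await page.evaluate(() => document.body.scrollHeight);
  for (let offset = 0; offset < totalHeight; offset += viewportHeight) {
    await page.evaluate(y => window.scrollTo(0, y), offset);
    await page.waitForTimeout(200); // give the webview time to settle after scrolling
    chunks.push(await page.screenshot());
  }
  return chunks;
}
```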
> Since the tests are stable, if the layout is broken there is no point in retrying the test, because we will get the same result.
@bezyakina I see. So far, test retries cannot be configured dynamically, and there are no immediate plans to support that.
The best workaround I can think of is to split the screenshot tests out into a separate project and configure it with a no-retries policy. Would that work?
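Something like this in the config, assuming the screenshot specs can be told apart by file name (the patterns and retry counts below are illustrative):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // Functional tests keep retries to absorb flaky infrastructure.
    { name: 'functional', testIgnore: /.*\.screenshot\.spec\.ts/, retries: 2 },
    // Screenshot comparisons never retry: a broken layout fails the same way every time.
    { name: 'screenshots', testMatch: /.*\.screenshot\.spec\.ts/, retries: 0 },
  ],
});
```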
@aslushnikov We did that. Our suite contains only screenshot tests. But when tests fail due to an unstable device connection, we have to restart the entire test suite. The point was not to restart the entire suite, but to retry only the failed test directly.
I would probably try/catch the screenshot comparison. It will still produce the artifacts, but will not fail the execution. We can consider adding `expect.trial` to mitigate your use case more broadly.
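A sketch of that workaround, using the variables from the screenshot test above (whether the diff artifacts end up in the report exactly this way is an assumption):

```ts
try {
  // toMatchSnapshot throws synchronously on a mismatch and still writes
  // the actual/expected/diff images into the test output directory.
  expect(actualSnapshot).toMatchSnapshot(expectedSnapshotPath);
} catch (error) {
  // Swallow the mismatch so the test stays green; the artifacts remain for inspection.
  console.warn(`Screenshot mismatch ignored: ${error}`);
}
```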
@bezyakina Did you try the try/catch workaround? If so, how was your experience? I am evaluating whether we should add `expect.trial` or not, and if we do, whether that would actually help you.
@dgozman Yes, I tried try/catch: if the error message contained a word like "diff", I changed the test status to passed. But this way it is difficult to separate genuinely passed tests from those that passed with a small diff in the HTML report.
I created #23197 which might be a duplicate of this issue.
Totally agree with this feature. We're currently using Ghostinspector and already miss this feature a lot.
A failed screenshot rarely has a huge severity: some order number, date/time, advertising, rendering difference, or animated object has changed. However, just accepting a 95% match misses a lot of important stuff. A threshold is IMHO not useful at all and should always be 100%.
We are using this method: Ghostinspector returns two results, Passed and ScreenshotPassed. The first one is an error, the last one is a warning.
In daily business we have a lot of findings in the screenshots, but around 95% of the „looks different" findings are false positives.
Hi! We have screenshot tests that may have a small pixel diff, or they may fail before even reaching the screenshot comparison. We would like to be able to retry a test when it failed for a reason other than a pixel difference, and we don't want to retry a test when the failure is caused by a pixel difference.
I see 2 options here:
Thank you for your attention)