Xray-App / playwright-junit-reporter

Playwright JUnit Enhanced XML reporter
Apache License 2.0

JIRA Test issues question #12

Open froibu opened 4 weeks ago

froibu commented 4 weeks ago

Hello @bitcoder @Iv3x . I'm trying to integrate a Playwright framework with JIRA Xray and stumbled upon this great project. While doing some tests, I've come across a question I cannot quite answer. Let's take, for example, the following scenario:

  1. I have no JIRA Test Issues created. I only have a Test Plan which is manually created in JIRA and that's it.
  2. I have my playwright test.spec.ts file where I define one test() function. I don't use any of the testInfo.annotations.push() calls to give JIRA Info about this test, since I have no Test Issue in JIRA. I only have my Test Plan from 1.
  3. Once the run is completed, I'm using a REST API call to: "https://{{JIRA_INSTANCE}}/rest/raven/1.0/import/execution/junit?projectKey={{Project_Key}}" and upload the generated results xml file
  4. As a result of 3, a new Test Execution Issue will be created in JIRA and also a Test Issue
  5. I link this new Test Execution with my manually created JIRA Test Plan from 1.

Until now everything is as expected, but going forward:

  1. I do another run locally, w/o any changes. The test is exactly the same
  2. I upload the new results.xml file generated during this second run and, as in step 4 above, a new Test Execution issue is created, but no new Test issue. The Test issue created by the first call is reused in the new execution. Why? How does it "know" that the same test is being run, given that, as mentioned above, I don't provide any testInfo.annotations.push() calls for my test?

Can someone please explain this behavior? It's probably working as intended, but I don't see how the same Test issue, once created, is being reused even though I don't explicitly map it in my Playwright test via testInfo.annotations.push().

bitcoder commented 3 weeks ago

Hi @froibu , whenever test results are imported the first time, Xray creates Test entities (i.e., Test issues) corresponding to each test method. The second time (and from there onwards) Xray will find that there are already existing Tests and will report the results (i.e., the Test Runs) to them. This is by design. In Xray, a Test is a reusable entity... an abstraction of a test idea/scenario/case/script.

How Xray handles test automation results is explained, for example, in the documentation on the handling of JUnit XML reports. In brief, it processes the classname and name attributes of the <testcase> elements in the XML report and uses them as a unique identifier for the test automation code. How those attributes get embedded in the JUnit XML report depends on the JUnit reporter (for Playwright, in this case).
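To make that concrete, here is a minimal sketch that pulls those two attributes out of a hypothetical <testcase> element. The dot-joined identity string is purely illustrative; the actual matching logic is internal to Xray, but it shows which attributes have to stay stable across runs for a Test issue to be reused:

```typescript
// Hypothetical JUnit XML fragment, roughly as a Playwright JUnit reporter might emit it.
const sample =
  `<testcase classname="example.spec.ts" name="Login with invalid credentials" time="1.2"/>`;

// Extract the two attributes Xray uses to recognise an already-existing Test.
// (\b prevents the name pattern from matching inside "classname".)
export function testIdentity(testcaseXml: string): string | null {
  const classname = testcaseXml.match(/classname="([^"]*)"/)?.[1];
  const name = testcaseXml.match(/\bname="([^"]*)"/)?.[1];
  return classname && name ? `${classname}.${name}` : null;
}
```

As long as neither the file/class part nor the test title changes between runs, the identity is the same, which is why the second import reuses the first run's Test issue.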

I would advise having a look at the two free courses on the Xray Academy.

froibu commented 3 weeks ago

Hi @bitcoder

Thanks so much for the quick reply and for clarifying my question. I have two more questions I would like your thoughts on:

  1. Let's say we have multiple .spec.ts test files, or a single .spec.ts file with multiple test() calls in it. As of now, the XML report is generated only at the very end of the Playwright run, e.g. if one .spec.ts file completes before another, I don't yet have an XML report. Can we have separate reports per .spec.ts test file, or per individual test() within a single .spec.ts file?

    • Regarding the Playwright HTML reporter: this is also generated as a single file containing all of the information for the overall run. If, for example, you have 3 tests in JIRA, each mapped/linked in your Playwright test() using annotations, then when the run finishes and results.xml is imported into JIRA, all of the JIRA tests will share one piece of evidence, e.g. you open the run evidence HTML of test X and it also contains info about test Y. How can we generate the HTML per test individually, so tests don't share each other's evidence?

Also, is this the proper way to attach evidence? testInfo.attach('evidence1.txt', { path: file, contentType: 'text/plain' });

Can you please share your thoughts on this? I would really appreciate it. Thanks.

bitcoder commented 3 weeks ago

You're welcome.

  1. Reports are generated at the end, yes. I don't know how to do it in a different way, sorry.
  2. Well, the Playwright HTML reporter goes beyond the scope of this project; you probably need to ask on the respective project, or perhaps on the Playwright Discord channel.
  3. Considering the way to attach evidence, please check this code sample:

    import { test, expect } from '@playwright/test';
    // LoginPage is a page object from the sample project; adjust the import path to your own.
    import { LoginPage } from './pages/LoginPage';

    test('Login with invalid credentials', async ({ page }, testInfo) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login('demo', 'mode1');
        const name = await loginPage.getInnerText();

        // Adding Xray properties via annotations.
        testInfo.annotations.push({ type: 'test_key', description: 'XT-93' });
        testInfo.annotations.push({ type: 'test_summary', description: 'Unsuccessful login.' });
        testInfo.annotations.push({ type: 'requirements', description: 'XT-41' });
        testInfo.annotations.push({ type: 'test_description', description: 'Validate that the login is unsuccessful.' });

        // Capture a screenshot and attach it as evidence.
        const path = testInfo.outputPath('tmp_screenshot.png');
        await page.screenshot({ path });
        testInfo.attachments.push({ name: 'screenshot.png', path, contentType: 'image/png' });

        expect(name).toBe('Login failed. Invalid user name and/or password.');
    });
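For the annotations and attachments above to actually end up in the XML report (and thus in Xray as evidence), the reporter has to be configured to embed them. A sketch of a playwright.config.ts follows; the option names are as I recall them from this reporter's README, so please verify them against the project documentation:

```typescript
// playwright.config.ts -- configuration sketch, option names to be verified against the README.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['@xray-app/playwright-junit-reporter', {
      outputFile: 'results.xml',
      // Copy testInfo.annotations (test_key, test_summary, ...) into <property> elements.
      embedAnnotationsAsProperties: true,
      // Emit these annotations as property text content instead of attribute values.
      textContentAnnotations: ['test_description'],
      // Embed testInfo.attachments (the evidence) under a single named property.
      embedAttachmentsAsProperty: 'testrun_evidence',
    }],
  ],
});
```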