SeleniumHQ / selenium-ide

Open Source record and playback test automation for the web.
https://selenium.dev/selenium-ide/
Apache License 2.0
2.78k stars 752 forks

Detailed Test reports in the runner #564

Open erwin3 opened 5 years ago

erwin3 commented 5 years ago

🚀 Feature Proposal

Currently, a test report written by selenium-side-runner contains only summary info on each test.

To get a better impression of failures, it would help to also see in the report the individual steps that were executed.

Motivation

For now, to evaluate failures from automatic test runs, you always have to re-run manually and check what is happening. It would help if I didn't have to re-run, but could instead see from the report what is happening.

Example

... CreateNewConfig
      × createTest (84490ms)

  ● CreateNewConfig › createTest
    ● OK - open http://xxx:1234/
    ● OK - pause 1000
    ● OK - click button "create" - [command:click, target:id=cfg-create-select-p1]
    ● OK - click the "next" button - [command:click, target:id=id-p1-wizard-next-button-text]
    ● OK - Input name - [command:type, target:id=cfg-setup-configname-textfield, value:testCfg1]
    ● ERROR - click the "next" button - [command:click, target:id=id-p1-wizard-next-button-text]

   5 |   await driver.wait(until.elementLocated(By.id(`id-p1-wizard-next-button-text`)), configuration.timeout);
   6 |   await driver.findElement(By.id(`id-p1-wizard-next-button-text`)).then(element => {
>  7 |     element.click();
     |             ^
   8 |   });
   9 |   await driver.wait(until.elementLocated(By.id(`id-p1-wizard-next-button-text`)), configuration.timeout);
  10 |   await driver.findElement(By.id(`id-p1-wizard-next-button-text`)).then(element => {

  at Object.checkLegacyResponse (../AppData/Roaming/npm/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/error.js:546:15)

....

-> This way, if you could see the previous successful steps in detail (what was done before the error happened), you could much better evaluate what the error is without re-running.

To be precise, the proposal is to write, for each step executed, a line like: < result > - < comment > - [command:< command >, target:< target >, value:< value >]
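The proposed per-step line could be produced by a small formatter like the sketch below. The step shape (result, comment, command, target, value) is an assumption taken from the format proposed above, not an existing runner API:

```javascript
// Sketch: format one executed step into the proposed report line.
// The `step` shape is hypothetical, mirroring the proposal above.
function formatStepLine(step) {
  const parts = [`command:${step.command}`, `target:${step.target}`];
  if (step.value !== undefined && step.value !== '') {
    parts.push(`value:${step.value}`);
  }
  return `${step.result} - ${step.comment} - [${parts.join(', ')}]`;
}

console.log(formatStepLine({
  result: 'OK',
  comment: 'Input name',
  command: 'type',
  target: 'id=cfg-setup-configname-textfield',
  value: 'testCfg1',
}));
// OK - Input name - [command:type, target:id=cfg-setup-configname-textfield, value:testCfg1]
```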

driket commented 5 years ago

If you specify an output directory to the selenium-side-runner command (--output-directory), a few JSON files are generated.

Have you looked at those JSON files? They contain a lot of useful information: all test suites with their included tests (statuses, errors, start/end times, etc.).

I think they can be used to generate detailed html reports but I totally agree screenshots would be a great addition.
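A report generator along those lines could start from a sketch like this. The field names used here (testResults, assertionResults, status, title) follow Jest's own JSON output, which the runner builds on; the exact shape in a given runner version may differ, so treat them as an assumption to verify against a real output file:

```javascript
// Hypothetical sketch: turn a parsed runner JSON result file into a
// short plain-text summary. Field names assume Jest-style JSON output.
function summarize(results) {
  const lines = [];
  for (const suite of results.testResults || []) {
    for (const test of suite.assertionResults || []) {
      lines.push(`${test.status.toUpperCase()} - ${test.title}`);
    }
  }
  return lines.join('\n');
}

// Hypothetical example data in that assumed shape:
const example = {
  testResults: [
    {
      assertionResults: [
        { status: 'passed', title: 'createTest' },
        { status: 'failed', title: 'deleteTest' },
      ],
    },
  ],
};
console.log(summarize(example));
// PASSED - createTest
// FAILED - deleteTest
```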

erwin3 commented 5 years ago

I know these JSON files and have checked them. They contain the same content as what is written to stdout. It is not enough if you have a test with 20 steps and it fails.

... This might be enough for testing static HTML pages. But for testing interactive web apps, where content changes during a test run, there should be a more detailed view.

To find out what the failure is, you have to re-run manually.

erwin3 commented 5 years ago

Hello again. I found that you just rely on Jest with reporters. There is already a similar ticket on the Jest project: https://github.com/facebook/jest/issues/6616

And it is still open there.

MisterGlass commented 5 years ago

This is a big problem for me currently. When debugging a test, it is very difficult to tell which step has the error. A step number or the ID (there is a UUID for each step in the JSON file) would be enough for me to determine what part of my test is running.
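Tracing such a step UUID back to the recorded command could look like the sketch below. The tests[].commands[] shape with id/command/target/value fields follows the .side project format; the lookup helper itself and the example data are hypothetical:

```javascript
// Hypothetical sketch: map a step UUID from a report back to the
// recorded command in a parsed .side project file.
function findCommandById(project, commandId) {
  for (const test of project.tests || []) {
    const index = (test.commands || []).findIndex((c) => c.id === commandId);
    if (index !== -1) {
      return {
        testName: test.name,
        stepNumber: index + 1,
        command: test.commands[index],
      };
    }
  }
  return null; // unknown id
}

// Hypothetical example project data:
const project = {
  tests: [
    {
      name: 'createTest',
      commands: [
        { id: 'aaa-111', command: 'open', target: '/', value: '' },
        { id: 'bbb-222', command: 'click', target: 'id=cfg-create-select-p1', value: '' },
      ],
    },
  ],
};
console.log(findCommandById(project, 'bbb-222').stepNumber);
// 2
```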

ianhe1 commented 4 years ago

It would also be useful to add a line to the test report at the end of each test case indicating its execution time (for example, "2 minutes 5 seconds").

The same goes for the test suite: at the end of each test suite, add a line with the suite's total execution time.
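A duration formatter for that plain-English style could be sketched like this (a hypothetical helper, not part of the runner):

```javascript
// Sketch: format a duration in milliseconds as "M minutes S seconds",
// e.g. 125000 -> "2 minutes 5 seconds".
function formatDuration(ms) {
  const totalSeconds = Math.round(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  const parts = [];
  if (minutes > 0) parts.push(`${minutes} minute${minutes === 1 ? '' : 's'}`);
  parts.push(`${seconds} second${seconds === 1 ? '' : 's'}`);
  return parts.join(' ');
}

console.log(formatDuration(125000));
// 2 minutes 5 seconds
```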

hiqqs commented 4 years ago

Are you able to see the results from echo commands in the console output in any way? This would be really helpful for me.

ianhe1 commented 4 years ago

Yes. That's what I have been doing.

jeremylorino commented 4 years ago

Looks valuable. I have been investigating modifying selianize for the runner so it would wrap each step in a try/catch and throw a more specific error.

Additionally, my initial POC includes the comment of the step if it was specified.
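That wrap-each-step idea could be sketched roughly as follows. The step shape and the `runStep` callback are hypothetical, not an existing runner API:

```javascript
// Hypothetical sketch: run one step and, on failure, rethrow with the
// step's comment and locator attached so the report pinpoints it.
async function runStepWithContext(step, runStep) {
  try {
    return await runStep(step);
  } catch (err) {
    err.message =
      `step "${step.comment || step.command}" ` +
      `[command:${step.command}, target:${step.target}] failed: ${err.message}`;
    throw err;
  }
}
```

The rethrown error keeps its original stack, so the Jest frame shown earlier in this thread would still appear, just with a more descriptive message.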

@erwin3 @MisterGlass @corevo what do y'all think of this initial approach?

corevo commented 4 years ago

The runner was not designed for these purposes, so it will be a major undertaking. My thought was to use the redesigned side-cli that we developed for the new Electron version; it has better reporting built in, and it has its own test framework.

But since I can't recommend that people use it yet, anything that could improve the current runner will be most welcome.

darkartswizard commented 4 years ago

Adding my two cents:

The first feature I add to my framework designs is execution time of both tests and suites.

It gives me a historical metric telling me whether my modifications have helped or hindered the overall response time of the framework when I rerun tests, and it eliminates the time spent doing the math between TestNG's start and end timestamps.

While I like Ian's plain-English version, "2 minutes 5 seconds", a simple hours:minutes:seconds timestamp, "00:02:05", would be fine.
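That hh:mm:ss form can be sketched as (a hypothetical helper, not part of the runner):

```javascript
// Sketch: format milliseconds as a zero-padded "HH:MM:SS" timestamp,
// e.g. 125000 -> "00:02:05".
function hms(ms) {
  const totalSeconds = Math.floor(ms / 1000);
  const pad = (n) => String(n).padStart(2, '0');
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}

console.log(hms(125000));
// 00:02:05
```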

And as Jeremy notes, I too will add additional comments to my POC's step output to give further detailed descriptions.

It might be identifying a step that enters intentionally invalid test data, like "08/26/2", with an "Entering invalid date format" detail.

This keeps me from wasting time erroneously thinking my custom data entry mechanism is flaky. It also makes the output more understandable to the client during demos.

Paul


jeremylorino commented 4 years ago

> The runner was not designed for these purposes, it will be a major undertaking, my thought was to use the redesigned side-cli that we develop for the new electron version, it has better reporting built in, and it has its own test framework.
>
> But since I can't recommend people to use it yet, anything that could improve the current runner will be most welcome.

I will have to take a look, as I would like to separate concerns: let the generator generate, the runner run, the tester test.

corevo commented 4 years ago

Initially I built a JS code export, which I then repurposed for a CLI runner; we figured that if we did it that way, the results would be closer to the extension version of the IDE than to a simple export.

The upside was obviously the huge time saving; the downside is that it created a codebase that is not very modular for what users needed. We can try to take what I've done in the Electron version, use that as the new selenium-side-runner, and build all these features into it, but eventually the two will have to split again down the road.

I think that if you went down the rabbit hole of separating everything, you'd arrive at something very similar. You can take a look at what is already done: side-cli basically works with current project files. The problem is that at some point support will break, which is why I don't consider it a replacement until we can ship the Electron version.

jeremylorino commented 4 years ago

Makes sense. I'll take a look at it and see where my time is best spent.

Thanks for all the info

manikantayarramsetti1 commented 1 year ago

💬 After running a Selenium IDE test, how do I generate an HTML report?

toddtarsi commented 1 year ago

@manikantayarramsetti1 - v4 of side runner exposes Jest options, so you can configure your report via the Jest test runner.
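For the HTML-report case, a standard Jest `reporters` entry is the usual mechanism. Whether a given side-runner v4 invocation picks up a config file like this is version- and setup-specific, so treat the wiring as an assumption; the `reporters` key itself and the third-party jest-html-reporters package are standard Jest ecosystem pieces:

```javascript
// jest.config.js -- standard Jest reporter configuration (sketch).
// How side runner v4 forwards this is an assumption to verify.
const config = {
  reporters: [
    'default',
    // Third-party HTML reporter; publicPath/filename are its options.
    ['jest-html-reporters', { publicPath: './reports', filename: 'report.html' }],
  ],
};

module.exports = config;
```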