Snooz82 / robotframework-datadriver

Library to provide Data-Driven testing with CSV tables to Robot Framework
Apache License 2.0

DataDriver.rerunfailed executes Passed tests if there are no failures in the original test #45

Closed. eldaduzman closed this issue 3 years ago.

eldaduzman commented 3 years ago

When re-executing a test suite with `--prerunmodifier DataDriver.rerunfailed`, I see that it executes all the tests again if there were no failures in the original execution.

Look at the attached 7zip file.

When executing run_has_failure.bat, I see that only the failed test is executed in the second and third attempts. However, when executing run_all_passes.bat, all three passed tests are executed again in both reruns.

This causes massive overhead in the execution of test pipelines.
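For reference, here is a minimal sketch of what I understand the re-run loop to be doing, expressed via the robot Python API rather than the actual .bat files. The `tests` path, result file names, and the `output.xml` argument passed to the modifier are my assumptions, not taken from the attached scripts:

```python
from robot import run

# Initial execution (assumed paths; the real ones are in the attached scripts).
run("tests", outputdir="results", output="output.xml")

previous = "results/output.xml"
# Two re-run attempts, as described above, each filtering on the previous result.
for attempt in (1, 2):
    output = f"rerun{attempt}.xml"
    run(
        "tests",
        # Assumed colon-argument syntax for pointing the modifier at the
        # previous run's output.xml.
        prerunmodifier=f"DataDriver.rerunfailed:{previous}",
        outputdir="results",
        output=output,
    )
    previous = f"results/{output}"
```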

I tried it with Windows 10 Pro and Python 3.7.4. The versions of robotframework and robotframework-datadriver are in the requirements.txt file in the zip.

Note: I also tried it with robotframework-datadriver version 1.0.0 and got similar results.

eldaduzman commented 3 years ago

rbf-rerun-failed.7z.txt

eldaduzman commented 3 years ago

Hi @Snooz82 , thanks for the quick fix.

DataDriver now re-runs only the failed tests and no longer re-runs passed tests when everything passed.

But now the problem is that the re-run process ends with an error:

[ ERROR ] Suite 'Tests All Passes' contains no tests after model modifiers

The exit code is 252.

I think this is a mistake: if there is nothing to re-run, the process should not return an error. This puts a lot of extra error-checking burden on testing pipelines after every retry.
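To illustrate that burden, here is a minimal sketch (my assumption, not taken from the attached scripts) of the extra check each re-run step now needs, again via the robot Python API:

```python
from robot import run

rc = run(
    "tests",
    # Assumed colon-argument syntax, as in the sketch above.
    prerunmodifier="DataDriver.rerunfailed:results/output.xml",
    outputdir="results",
    output="rerun.xml",
)

if rc == 252:
    # Robot Framework reports 252 here because the modifier removed all
    # tests ("contains no tests after model modifiers"). In this sketch the
    # pipeline has to translate that into "nothing left to re-run" instead
    # of treating it as a real failure.
    rc = 0
```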

WDYT?

Snooz82 commented 3 years ago

Hi @eldaduzman

I thought so as well, but the error is also there, with the same code, when you use Robot Framework's own --rerunfailed option.

So Robot produces a different error message, but also error code 252.

I think DataDriver should behave the same as Robot Framework.
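For comparison, a minimal sketch of Robot Framework's own --rerunfailed plus rebot --merge flow, expressed via the robot Python API (paths and the `tests` directory are placeholders):

```python
from robot import run, rebot

# Initial run.
run("tests", outputdir="results", output="original.xml")

# Re-run only the tests that failed in the initial run.
# If nothing failed, this also ends with return code 252.
run(
    "tests",
    rerunfailed="results/original.xml",
    outputdir="results",
    output="rerun.xml",
)

# Merge the re-run results back into the original report.
rebot(
    "results/original.xml",
    "results/rerun.xml",
    merge=True,
    outputdir="results",
    output="merged.xml",
)
```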

eldaduzman commented 3 years ago

I see. I agree, the behavior should be consistent.

Is there a "best practice" for re-running tests?