fiuba08 / robotframework

Automatically exported from code.google.com/p/robotframework
Apache License 2.0

Failed tests can be re-executed with `--runfailed` option #702

GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Motivation:

There are several occasions where test suites are more or less expected to have 
tests failing as false positives. This might be due to timing issues or 
leftovers of previously executed tests. The typical process is to manually 
rerun those tests to see if they pass when executed on their own.

Request:

Add a command line option to pybot/jybot that takes one or multiple output.xml 
files as arguments, parses these files to get a list of the failed tests, 
reruns only those failed tests, and generates a new merged output.xml 
containing both the old and the new results.

Suggested implementation details (these may be a matter of taste, though):

- The pybot option, e.g. "--RerunFailed", should take multiple output xmls
- The newly generated output xml files should be the output xmls used as input, 
but with the re-executed tests replaced by the log data from the rerun
- If multiple xmls were specified for rerunning, multiple new output.xmls 
should also be written; an additional option might be added to specify this 
behavior (log into multiple files / one file)
- New output.xmls might get automagically suffixed with e.g. "_rerun"
- Pybot or rebot could support a command line option that takes multiple 
output.xml files, expecting that one of the files is the original run and the 
others are reruns
-- Every test that appears in at least one of the files should appear only once 
in the new output
-- Every test that appears in multiple of those files and fails in all 
occurrences should be marked as failed in the new output
-- Every test that appears in multiple of those files and passes in at least 
one occurrence should be marked as passed in the new output (see the sketch 
below).
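
A minimal Python sketch of this proposed merge rule (illustrative only; not 
part of Robot Framework):

    # Merge rule sketch: a test passes if it passed in at least one run and
    # fails only if it failed in every occurrence across the output files.
    def merged_status(statuses):
        return 'PASS' if 'PASS' in statuses else 'FAIL'

    assert merged_status(['FAIL', 'PASS']) == 'PASS'
    assert merged_status(['FAIL', 'FAIL']) == 'FAIL'
    assert merged_status(['PASS']) == 'PASS'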

Original issue reported on code.google.com by ThomasKl...@googlemail.com on 11 Nov 2010 at 10:04

GoogleCodeExporter commented 9 years ago
How about, instead of an option, simply having "pybot output.xml" rerun all 
failed tests by default? Presumably the output file has all the information 
needed to do this (and if not, it should be added).

Original comment by bryan.oa...@gmail.com on 11 Nov 2010 at 11:15

GoogleCodeExporter commented 9 years ago
Maybe this could be combined with issue 127 (being able to use rebot to filter 
on PASS/FAIL)?
Something like this would be very useful functionality indeed, and it should 
probably be given a fair amount of thought so that it can be done nicely.
I, for one, would love to be able to re-run the last failed test cases with 
just a simple command.

Original comment by Magnus.S...@gmail.com on 12 Nov 2010 at 9:02

GoogleCodeExporter commented 9 years ago
Also for us (the @COM and NAB projects at NSN) it would be very helpful to have 
an option that automatically repeats failed test cases, with the final result 
of the Robot run combining the results of the original run and the re-run. This 
means a test case is marked as passed if it passes in at least one run, and a 
test case is only marked as failed if it fails in both the original run and the 
re-run.
Our product (NAB) development is done inside MPP (Merlin production pipe). 
This means we have a completely automated continuous integration process, with 
implementation, production, packaging, installation and execution of automatic 
test cases. A lot of our test cases are implemented as GUI test cases. Even 
though we have already spent a lot of effort on stabilizing these test cases, 
we still sometimes face sporadic errors (caused by sporadic problems in our 
product, in the simulator, in the test libraries used (SwingLibrary), or in the 
connections between the machines). Therefore the re-execution would be really 
very helpful for us, as a build would then only be marked as "red" if the 
"failure" in the test case is reproducible.
However, we would prefer a solution where not only the failed test cases but 
the complete test suite containing the failed test case is repeated, as we 
sometimes have dependencies between different test cases in one test suite 
(e.g. Create NE; Display NE; Modify NE; Delete NE).

Original comment by michael....@nsn.com on 25 Nov 2010 at 10:45

GoogleCodeExporter commented 9 years ago
Too big a task for 2.5.5, but this will be included in 2.6. The details of how 
it is implemented can be discussed when we start 2.6 development in January.

Original comment by pekka.klarck on 2 Dec 2010 at 7:44

GoogleCodeExporter commented 9 years ago
Bad news:
RF 2.6 is delayed until later this spring, so it will take a bit more time 
before this feature is in.

Good news:
As part of a mailing list discussion, Mikko wrote a simple script that creates 
an argument file with failed tests collected from an output file [1]. A 
limitation of that script was that it didn't work well when the same test name 
appeared multiple times -- in practice, all of those tests would be executed. 
RF 2.5.6 supports specifying tests to run with their long names (issue 757), 
so this limitation can be lifted. I just did that, and the enhanced version of 
Mikko's script is attached. It uses long names with RF 2.5.6 or newer 
(including the trunk version) and normal names with older versions. I hope this 
script is useful until we get rerun support into the framework itself.

[1] http://groups.google.com/group/robotframework-users/browse_thread/thread/4ef9c72d3f22aa12/21f1175cbc3c0f94?lnk=gst&q=rerun+failed#21f1175cbc3c0f94

Original comment by pekka.klarck on 2 Feb 2011 at 9:30

Attachments:
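
The attached script itself is not preserved in this export. Below is a minimal 
sketch of what such a script could look like, assuming the standard output.xml 
structure (nested suite elements, and test elements with a status child):

    # Sketch of a gather-failed-tests script (hypothetical reconstruction;
    # the real attachment is not included in this export). Walks output.xml
    # and writes an argument file with one --test option per failed test,
    # using long names.
    import sys
    import xml.etree.ElementTree as ET

    def gather_failed(output_xml, argfile='failed_tests.txt'):
        failed = []

        def walk(suite, parent):
            name = suite.get('name')
            longname = '%s.%s' % (parent, name) if parent else name
            for test in suite.findall('test'):
                status = test.find('status')
                if status is not None and status.get('status') == 'FAIL':
                    failed.append('%s.%s' % (longname, test.get('name')))
            for sub in suite.findall('suite'):
                walk(sub, longname)

        for suite in ET.parse(output_xml).getroot().findall('suite'):
            walk(suite, '')
        with open(argfile, 'w') as out:
            for name in failed:
                out.write('--test %s\n' % name)

    if __name__ == '__main__':
        gather_failed(sys.argv[1])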

GoogleCodeExporter commented 9 years ago
Unfortunately there is no time to get this into the 2.6 release. We could 
potentially add a feature where Robot reads failed tests from an earlier 
output file and executes only them, but that's something you can already do 
with the gather_failed_tests.py script attached to the previous comment.

My understanding is that most people would like Robot to automatically rerun 
failed tests as part of the original execution. Doing that is a much bigger 
task and also requires a better running API (issue 826), because the current 
API doesn't support running tests multiple times as part of one execution.
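
For comparison, here is a rough sketch of such an automatic rerun built on a 
programmatic running API. It is written against the robot and robot.api 
packages that exist today but did not at the time of this comment; the paths 
and the single-rerun policy are illustrative:

    # Run once, collect failed tests from output.xml, then rerun only those.
    import robot
    from robot.api import ExecutionResult

    def failed_long_names(output_xml):
        # Collect the long names of all failed tests in an output file.
        names = []
        def visit(suite):
            for test in suite.tests:
                if not test.passed:
                    names.append(test.longname)
            for sub in suite.suites:
                visit(sub)
        visit(ExecutionResult(output_xml).suite)
        return names

    robot.run('tests', output='original.xml', log='NONE', report='NONE')
    failed = failed_long_names('original.xml')
    if failed:
        # Rerun only the failures; merging the two outputs is a separate step.
        robot.run('tests', test=failed, output='rerun.xml', log='NONE', report='NONE')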

Original comment by pekka.klarck on 19 Jun 2011 at 11:20

GoogleCodeExporter commented 9 years ago
Our need here is the former one you describe (read failed tests from an earlier 
output file and execute only them).
The way we use Robot is the following; it might help you decide what to do on 
this subject.
1) QA adds new test cases during the day using RIDE (executing them one by one)
1.1) during this time the developers are adding to/fixing the product under test
2) at night, the whole Robot regression test suite runs (it can last 6 hours)
3) in the morning we have the report with some failed tests, and we would like 
to run only the failed ones (otherwise it takes too long). The first level of 
functionality would be to run them through Robot on the command line, as 
suggested in previous posts. The second level would be to "load" the report 
within RIDE to see green/red on the different test cases and rerun the 
failed/red ones.

I tried the gather_failed_tests.py script but did not succeed. My feeling was 
that I had issues because my test suite names were full of underscores that 
became whitespace... but I have to check that again.

Hope this helps,
Laurent

Original comment by laurent....@gmail.com on 20 Jun 2011 at 8:10

GoogleCodeExporter commented 9 years ago
Laurent, your use case sounds interesting and definitely worth supporting. 
Unfortunately there's no time to get built-in support for it into RF 2.6 
(unless someone outside the core team wants to look at it), but the 
gather_failed_tests.py script attached to comment 5 ought to work for you too. 
Please try it again and report here, or on robotframework-users, whether it 
works or not.

Original comment by pekka.klarck on 20 Jun 2011 at 9:59

GoogleCodeExporter commented 9 years ago
This is such a useful feature that it's worth waiting for. Please don't drop 
it. I have a different way to implement it that would handle all(?) the 
scenarios.

How about having a keyword, run in either suite setup or test case 
setup/teardown, that instructs Robot to rerun a test n times if it encounters 
a specific error.

e.g.,

Rerun Testcase On Error  no_of_attempts  *list_of_errors

I use RF for E2E integration testing, and most of our test suites run for a 
couple of hours, at the end of which we see that some of the tests fail 
randomly. I suspect it's because of race conditions, as RF + Selenium update 
the web page very fast, and it would be sad to slow everything down with many 
sleeps. But the company would pay for fixing it, as it's not possible to 
recreate manually.

tl;dr: it's needed in cases where "It's not an error if it's not reproducible."

Original comment by sajjadma...@gmail.com on 23 Jun 2011 at 9:56

GoogleCodeExporter commented 9 years ago
@sajjad, implementing Rerun Testcase On Error would be pretty hard. More 
importantly, using it would be pretty hard because you would need to specify 
the keyword in the test data. I would really like to have a command line option 
like `--rerunfailed`, to be able to select at run time whether I want to rerun 
tests or not.

Unfortunately, getting this feature in any form into 2.6 is pretty much 
impossible, because we would like to get that release out already next week and 
there's a lot of work left. Getting it into 2.6.1 is possible, though, 
especially if you are interested in contributing somehow.

Finally, in your SeleniumLibrary tests you should probably use some of the 
`Wait Until` keywords the library provides, or wrap SeLib keywords with the 
BuiltIn keyword `Wait Until Keyword Succeeds`.

Original comment by pekka.klarck on 23 Jun 2011 at 10:18

GoogleCodeExporter commented 9 years ago
Thanks Pekka for your response. I'm not an expert in Python, else I would 
surely have helped, but I will still try. I have implemented a (very limited) 
keyword using RF that retries x times if a specific error happens. I use it on 
sections of a test case that are fragile, to make sure the error is 
reproducible.

The limitations listed below could be addressed if this were done directly in 
the library. Limitations of 'Rerun Testcase On Error':
1 - the keyword must not require arguments
2 - the keyword must not return any values
3 - only one fixed error message is supported

Rerun Testcase On Error
    [Arguments]  ${n}  ${error msg}  ${keyword}
    # Retry ${keyword} up to ${n} times as long as it fails with ${error msg}.
    :FOR  ${i}  IN RANGE  0  ${n}
    \  ${status}  ${msg}=  Run Keyword And Ignore Error  Run Keyword And Expect Error  ${error msg}  ${keyword}
    \  ${msg escaped}=  Replace String Using Regexp  ${msg}  '  \\'
    # The wrapped keyword succeeded (the expected error did not occur): stop retrying.
    \  Run Keyword If  'Expected error \\'${error msg}\\' did not occur'=='${msg escaped}'  Exit For Loop
    # The wrapped keyword failed with some other error: fail immediately with that error.
    \  Run Keyword If  '${error msg}'!='${msg escaped}'  Fail  ${msg}
    # Still failing with the expected error after ${n} attempts: give up.
    Run Keyword If  'Expected error \\'${error msg}\\' did not occur'!='${msg escaped}'  Fail  rerun fails for keyword [${keyword}]

For timing issues I use `Wait Until Keyword Succeeds`, and it is more reliable 
than Selenium's Wait Until * keywords.

Original comment by sajjadma...@gmail.com on 24 Jun 2011 at 9:59

GoogleCodeExporter commented 9 years ago
I have re-tested the gather_failed_tests.py script, and it also works with long 
names and with whitespace in names.

Usage:
1) pybot [PATH_TO_TESTS]
2) python gather_failed_tests.py [PATH_TO_OUTPUT.XML]
3) pybot --argumentfile failed_tests.txt [PATH_TO_TESTS]

Original comment by mikko.ko...@gmail.com on 7 Nov 2011 at 1:29

GoogleCodeExporter commented 9 years ago
Initially descoped from 2.7 due to lack of time

Original comment by robotframework@gmail.com on 2 Dec 2011 at 8:42

GoogleCodeExporter commented 9 years ago
Wondering: is this feature still on track, or has it been dropped?

Original comment by shawna.q...@gmail.com on 29 Aug 2012 at 5:26

GoogleCodeExporter commented 9 years ago
I have another idea for how to solve this issue: maybe feed Robot with the 
report from an execution (the report could also be generated later using 
Rebot). It is much smaller than the standard output.xml and contains all the 
information required to execute the tests one more time.

Original comment by Marcin.Koperski on 23 Oct 2012 at 9:20

GoogleCodeExporter commented 9 years ago
Wondering: is this feature still on track, or has it been dropped?

Original comment by yingat...@gmail.com on 21 Jan 2013 at 4:27

GoogleCodeExporter commented 9 years ago
Just an additional idea, tying together this concept, a (very) slightly 
different way to look at it, and Jenkins.

The idea would be to parse the results from previous Jenkins test runs and run 
failing tests first. I got this idea from this blog post:
http://michaelfeathers.typepad.com/michael_feathers_blog/2012/09/precognitive-build-servers.html

The capabilities could be built up in the following order:
 + first, just run the tests that failed in the last Jenkins build
 + run failing tests first and then run the normal script
 + run tests that failed in the last N builds, where the user can specify N

Issues would be:
 - if the test cases or the specified test case pattern changed, failed tests might no longer be run
 - handling Jenkins matrix build jobs
 - tests would need to be independent, with test and suite setups and teardowns as needed

The gather_failed_tests.py script with some additional scripting could be used 
in the Jenkins test job to get an initial version of this working.

Original comment by lars.nor...@gmail.com on 6 Feb 2013 at 4:34

GoogleCodeExporter commented 9 years ago
Functionality should be there after re5d7333289fd.

Still needs updates to User Guide.

Original comment by mikko.ko...@gmail.com on 17 May 2013 at 9:32

GoogleCodeExporter commented 9 years ago
User Guide updates in r55bc1cd8280e and regen in rcd4dd98ec20f.

Moving to review state.

Original comment by mikko.ko...@gmail.com on 17 May 2013 at 10:00

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 7b0931720b53.

Some enhancements/changes:
- Running failed tests from an output containing only passing tests is now an error.
- Error reporting when gathering failed tests fails has been improved.
- Tests were cleaned up.

Still need to review docs but otherwise this can be considered done.

Original comment by robotframework@gmail.com on 20 May 2013 at 10:55

GoogleCodeExporter commented 9 years ago
This issue was updated by revision 910b6562d1f9.

Fine-tuned docs in User Guide and --help.

Also noticed that this is not yet tested with --include and, more importantly,
--exclude. I still want to add those tests before closing.

Original comment by robotframework@gmail.com on 20 May 2013 at 10:55

GoogleCodeExporter commented 9 years ago
Using --runfailed with --exclude is tested in revision 9f69789eb7a4. This issue 
can be considered done.

With 22 stars this is the highest voted issue in Robot Framework history. Good 
to finally close it.

Original comment by pekka.klarck on 21 May 2013 at 9:57

GoogleCodeExporter commented 9 years ago
Glad to see this feature implemented. I tried it and it's working fine. I am 
running the failed tests from Jenkins and had to do a little tweak; not sure if 
there is an easier way.

I run my tests so that the build does not exit even if I have failures:

>> java  org.robotframework.RobotFramework --NoStatusRC -ib1 ./testsuites

I rename output.xml to output_ren.xml and rerun the failed tests from it:

>> java  org.robotframework.RobotFramework --NoStatusRC -R output_ren.xml ./testsuites

The failed tests are run, and the new output is captured in output.xml.

Now I run Rebot to combine the results:

>> java  org.robotframework.RobotFramework rebot output_ren.xml output.xml

I want to combine only the passed test cases from output_ren.xml with all test 
results from output.xml; how can I achieve this? I could not see an option for 
filtering by test case status in rebot.

I am looking to get results in the format specified in the original 
description:

-- Every test that appears in at least one of the files should appear only once 
in the new output
-- Every test that appears in multiple of those files and fails in all 
occurrences should be marked as failed in the new output
-- Every test that appears in multiple of those files and passes in at least 
one occurrence should be marked as passed in the new output.

Original comment by mbdiwa...@gmail.com on 27 Jun 2013 at 7:09

GoogleCodeExporter commented 9 years ago
Hi Pekka, Mikko,
any pointers on how I can create a single output file without double-counting 
failed tests? Should I create a new ticket?

Here is what I am looking for:

1) Run the entire test suite (b tests pass, c tests fail)
2) Create output_passed.xml containing only the b passed test cases
3) Do a --runfailed run on the c failed tests (x pass, y fail)
4) Combine the output.xml result with output_passed.xml

Regards,
MD

Original comment by mbdiwa...@gmail.com on 10 Jul 2013 at 8:52

GoogleCodeExporter commented 9 years ago
Issue 1615 proposes adding a --merge option to Rebot to allow merging original 
results with the results obtained from a rerun.
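
Once that is in place, usage would presumably look something like this (exact 
syntax to be settled in issue 1615):

>> rebot --merge original.xml rerun.xml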

Original comment by pekka.klarck on 20 Dec 2013 at 11:58

GoogleCodeExporter commented 9 years ago
Hi all,
I am stuck... After running the Python script, it generates the txt file with 
the failed cases [great :)], but when I rerun I get an error:

pybot_run --dut=XXX.XXX --argumentfile failed_tests.txt

*WARN* Mon Nov 24 02:31:21 2014 Failure during pexpect

No test script has been specified

It's the same if I use the standard pybot command.
Am I doing something wrong here?

My text file contains: --test ERTS.Advanced Malware.Amp Report.Tvh735147c
But my source file is: /home/testuser/work/sarf/tests/phoebe90/antispam .

How can I retrieve the source file names of the test cases that failed as 
well? This would give us the flexibility to run them from any machine...

Thanks,
Sandeep

Original comment by sandis...@gmail.com on 25 Nov 2014 at 5:48

GoogleCodeExporter commented 9 years ago
If I could generate the output file in the format below, it would help us a 
lot:
--test  Test Case  Source File
e.g.:
--test  Tvh667811c  /home/testuser/work/sarf/tests/phoebe90/alerts/alerts.txt
Any pointers on how I could modify the script to achieve this would also be 
helpful (see the sketch below).

Sandeep

Original comment by sandis...@gmail.com on 25 Nov 2014 at 6:20
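
One possible direction for the above: the standard output.xml records each 
suite's data source in its source attribute, so a gather-style script can 
collect source files as well. The exact --test/source pair format asked for 
above is not something pybot accepts directly, but an argument file may 
contain data source paths in addition to options. A hypothetical sketch (note 
that long test names depend on how the top-level suite is formed, so the 
--test patterns may need adjusting when running against individual source 
files):

    # Hypothetical extension of a gather script: also record each failed
    # test's data source from its suite's 'source' attribute and append the
    # unique source paths to the argument file, so the rerun command needs
    # no separately specified test data.
    import sys
    import xml.etree.ElementTree as ET

    def gather_failed_with_sources(output_xml, argfile='failed_tests.txt'):
        failed, sources = [], []

        def walk(suite, parent):
            name = suite.get('name')
            longname = '%s.%s' % (parent, name) if parent else name
            for test in suite.findall('test'):
                status = test.find('status')
                if status is not None and status.get('status') == 'FAIL':
                    failed.append('%s.%s' % (longname, test.get('name')))
                    source = suite.get('source')
                    if source and source not in sources:
                        sources.append(source)
            for sub in suite.findall('suite'):
                walk(sub, longname)

        for suite in ET.parse(output_xml).getroot().findall('suite'):
            walk(suite, '')
        with open(argfile, 'w') as out:
            for name in failed:
                out.write('--test %s\n' % name)
            for source in sources:
                out.write('%s\n' % source)

    if __name__ == '__main__':
        gather_failed_with_sources(sys.argv[1])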