Closed GoogleCodeExporter closed 9 years ago
See issue 177 for an enhancement idea that would make it possible to implement this using the listener interface. I keep this issue open separately since there are other ways to implement this than using the listener interface.
Original comment by pekka.klarck
on 10 Dec 2008 at 2:16
Changed the title since "PASS keyword" is not the only way to implement this. Feel free to fine-tune it.
Original comment by pekka.klarck
on 10 Dec 2008 at 2:19
Martin, could you please give an example of how to use the Pass keyword? We discussed this issue a bit more today, and couldn't figure out any situation where using Run Keyword If wouldn't be enough. If you can give us an example, we can see whether there is some other way to get the same effect or whether a Pass keyword is really needed.
Original comment by pekka.klarck
on 19 Jan 2009 at 2:56
What about error test cases? We need this keyword as well, or is it possible to do this in a Python library? Do you provide any methods somewhere to pass tests?
As I said, sometimes we need a test to fail as an expected result, but because the failure is expected I wish to pass the test in the end.
Original comment by getxs...@gmail.com
on 27 Feb 2009 at 8:31
getxsick, test cases pass if all the keywords pass, i.e. the lowest level keywords don't raise any exception. If you have a situation where you need to test that doing something causes an expected error, then you just need a keyword that checks that the error is received and raises an exception if there is no error. These kinds of tests are sometimes called negative tests [1], but the tests themselves should always pass.
[1] http://osdir.com/ml/programming.software-qa/2004-12/msg00060.html
Notice also that positive or negative tests have nothing to do with this issue. The request here is to stop test execution at some point, without executing the rest of the keywords, so that the execution status is PASS.
Original comment by pekka.klarck
on 27 Feb 2009 at 8:45
One way to implement the behaviour requested in this issue already now is 1) failing tests that should be stopped with pass status with a special error, and 2) afterwards post-processing the output XML so that tests with that error are turned into pass status.
A simple script like the one below could do the post-processing. See the User Guide for more information about RF's internal APIs, and for usage examples see e.g. the times2csv.py and statuschecker.py tools.
from robot.output import TestSuite

def process_suite(suite):
    for sub in suite.suites:
        process_suite(sub)
    for test in suite.tests:
        if test.message == 'STOPPED WITH PASS STATUS':
            test.status = 'PASS'

suite = TestSuite('output.xml')
process_suite(suite)
suite.write_to_file()
Original comment by pekka.klarck
on 27 Feb 2009 at 8:55
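For illustration, the same post-processing idea can be sketched using only the standard library. Note this is a simplified approximation of the output XML, not Robot Framework's real format, which has more attributes and nesting:

```python
# Minimal sketch: flip tests failed with a special marker message to PASS.
# The XML layout below is a simplified assumption for illustration only;
# the real Robot Framework output.xml differs in details.
import xml.etree.ElementTree as ET

MARKER = 'STOPPED WITH PASS STATUS'

def pass_marked_tests(xml_text):
    """Return the XML with marker-failed tests changed to PASS status."""
    root = ET.fromstring(xml_text)
    for status in root.iter('status'):
        if status.get('status') == 'FAIL' and (status.text or '').strip() == MARKER:
            status.set('status', 'PASS')
            status.text = ''
    return ET.tostring(root, encoding='unicode')

example = """<robot>
  <suite name="Demo">
    <test name="Stopped early">
      <status status="FAIL">STOPPED WITH PASS STATUS</status>
    </test>
    <test name="Real failure">
      <status status="FAIL">Something actually broke</status>
    </test>
  </suite>
</robot>"""

result = pass_marked_tests(example)
```

Only the test failed with the exact marker message is turned into PASS; genuine failures keep their FAIL status and message.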
Because this is possible to achieve otherwise and the implementation is quite hard, we decided not to take action at this point.
Original comment by janne.t....@gmail.com
on 24 Feb 2010 at 10:24
Issue 495 has been merged into this issue.
Original comment by pekka.klarck
on 9 Mar 2010 at 12:23
There seems to be so much interest for this keyword that I'll reopen this issue. Implementing this ought to be relatively easy using the same "exceptions with special attributes" approach as with continue on failure (issue 137) and fatal errors (issue 366). I'd like to know some use cases where this functionality is useful before this is added, though.
Original comment by pekka.klarck
on 25 May 2010 at 8:30
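The "exceptions with special meaning" approach mentioned above can be illustrated with a small self-contained sketch. All names here are made up for illustration; they are not Robot Framework's actual internals:

```python
# Illustrative sketch: a tiny runner treats one exception type as
# "stop the current test now with PASS status". Class and function
# names are hypothetical, not Robot Framework's real implementation.

class PassExecution(Exception):
    """Raised by a keyword to stop the current test with PASS status."""

def run_test(keywords):
    """Run keywords in order; return a (status, message) pair."""
    try:
        for kw in keywords:
            kw()
    except PassExecution as passed:
        # Special exception: stop early, but report the test as passed.
        return 'PASS', str(passed)
    except Exception as failed:
        return 'FAIL', str(failed)
    return 'PASS', ''

def precondition_not_met():
    raise PassExecution('Feature disabled in this configuration')

def should_not_run():
    raise AssertionError('This keyword should have been skipped')

status, message = run_test([precondition_not_met, should_not_run])
```

The runner never reaches the second keyword, and the test still ends with PASS status and an explanatory message.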
Here's our use case:
We have a whole suite of tests that test application features A, B, ... Z. Our website can be configured in different ways for different markets, and each market has a unique configuration of features controlled by a config file. So, one config might enable A and B, but disable C and Z.
During development a developer or tester may want to run the test suite, tweak the configuration, run the test suite, tweak the configuration, ... But there's no need to run all the tests (and get all the failures) for features that aren't enabled. It would be nice if each test suite could determine whether it should run or not, and be skipped if the features for that suite are not enabled.
So, we want to have a test that looks something like this:
skip if features are disabled | A | B | C
${result}= | use feature A | foo | bar
Should be equal | ${result} | whatever
${result}= | use feature B | foo | bar
Should be equal | ${result} | whatever
...
The "skip if features are disabled" keyword could be designed to look in a config file at runtime and determine whether features A, B and C are enabled. If any are disabled, the test can be skipped.
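As a rough sketch of what such a check could do (the config format and function name are hypothetical, not part of any proposal here), a library backing that keyword could read the feature flags from a config file and report which requested features are disabled:

```python
# Hypothetical sketch of the feature-flag check behind a
# "Skip If Features Are Disabled" keyword. Config format and
# names are illustrative assumptions, not an actual API.
import configparser

def disabled_features(config_text, *features):
    """Return the requested features that the config does not enable."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    # Features missing from the config count as disabled (fallback=False).
    return [f for f in features
            if not parser.getboolean('features', f, fallback=False)]

example_config = """
[features]
A = true
B = true
C = false
"""

# C is explicitly disabled and Z is absent, so a test needing
# either of them could be skipped.
missing = disabled_features(example_config, 'A', 'B', 'C', 'Z')
```

The keyword wrapper would then stop the test with PASS (or skip) status whenever this list is non-empty.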
Original comment by bryan.oa...@gmail.com
on 25 May 2010 at 9:05
I also would like to have keywords like 'Stop Test Execution' and 'Stop Suite Execution'.
'Stop Test Execution' should stop the execution of the current test and move on to the next test case.
Original comment by palani.s...@gmail.com
on 3 Jan 2011 at 3:03
Issue 812 has been merged into this issue.
Original comment by pekka.klarck
on 27 Apr 2011 at 1:25
I also would like to have stopping of test execution with PASS status (by using some keyword). Stopping suite execution with some other keyword would also be nice.
http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html#using-keywords
This feature would be very handy, as some parts of a test could easily be ignored without the need to add an artificial user keyword and call "Run Keyword If" for it when some condition happens (the environment does not have the feature to be tested active, etc.).
Issue 981 has been merged into this issue.
Original comment by pekka.klarck
on 19 Oct 2011 at 9:57
Here is a use case for Selenium2Library. We have tests for the Selenium2Library keywords we provide for RF. One particular test, "Double Click Element", fails for Firefox. This is an open issue for the Python driver 2.18.1. So we would like to say something like "skip if Firefox", or even better, "skip if some_arbitrary_keyword".
In our case that arbitrary keyword would check for browser = firefox and selenium version = 2.18.1. So when 2.19 comes out (actually I think it has already) the test would just start running again. We would want the test to show up in the log, perhaps colored yellow, saying that this test was skipped, and it would be nice if we could pass in an explanation string as to why the test was skipped.
For us, though, I think we would prefer a warning or skipped status rather than a pass status because, for one, it would serve as a good indication to end users of what is actually working for their desired configuration. This would also help us in testing and assessing various browsers. I see the skip status substituting for a known-issue flag, so we can explicitly say: this is an issue we know about and will revisit next version, or something like that.
Original comment by johnso...@gmail.com
on 16 Feb 2012 at 4:01
Issue 1214 has been merged into this issue.
Original comment by pekka.klarck
on 30 Aug 2012 at 9:25
It would be very good to have something like this for us as well. Sometimes a certain result is sufficient and doesn't require further execution of test steps or further result evaluation.
Original comment by froschre...@gmail.com
on 19 Dec 2012 at 1:42
Original comment by jussi.ao...@gmail.com
on 6 Mar 2013 at 9:33
Although not ideal, a temporary workaround for exiting a keyword mid-execution is to put everything in your keyword inside a :FOR loop and use the 'Exit For Loop' keyword in conjunction with 'Run Keyword If'.
Original comment by dbil...@gmail.com
on 26 Mar 2013 at 8:03
Original comment by mika.han...@gmail.com
on 2 Apr 2013 at 8:28
Original comment by mika.han...@gmail.com
on 15 Apr 2013 at 7:54
Original comment by tatu.ka...@eficode.com
on 8 May 2013 at 1:09
Original comment by tatu.ka...@eficode.com
on 8 May 2013 at 1:59
This issue was updated by revision d246258f2da6.
Added empty tests.
Original comment by tatu.ka...@eficode.com
on 14 May 2013 at 12:19
This issue was updated by revision 54a24b793406.
Initial implementation done with tests and some refactoring. Documentation missing. More refactoring needed.
Original comment by tatu.ka...@eficode.com
on 22 May 2013 at 11:39
This issue was updated by revision 442ebe7c268b.
Refactored ExitForLoop to utilise _PassExecution and removed the execution_should_be_passed property from ExecutionFailed.
Original comment by tatu.ka...@eficode.com
on 22 May 2013 at 1:14
This issue was updated by revision 8265cbf4b77d.
Refactored ExecutionPassed => PassExecution as per the BuiltIn keyword
Refactored _ExecutionPassed => ExecutionPassed for clarity
Original comment by tatu.ka...@eficode.com
on 22 May 2013 at 1:14
This issue was updated by revision 58b88ad2c19b.
Fine-tuned Pass Execution: removed the possibility to call the keyword without a message, updated tests. Documentation still missing.
Original comment by tatu.ka...@eficode.com
on 22 May 2013 at 1:39
This issue was updated by revision eebc44770f48.
Implemented 'Pass Execution If' keyword. Tested and documented.
Original comment by anssi.sy...@eficode.com
on 23 May 2013 at 9:10
This issue was updated by revision a6d9528a8e41.
Subclasses of ExecutionPassed now handle earlier failures correctly
Original comment by tatu.ka...@eficode.com
on 23 May 2013 at 12:34
This issue was updated by revision c8e45471d655.
Enhanced default error message in subclasses of ExecutionPassed
Original comment by robotframework@gmail.com
on 23 May 2013 at 1:14
This issue was updated by revision e65f2413d57b.
Pass Execution keyword raises error when used with empty string as message
Original comment by tatu.ka...@eficode.com
on 24 May 2013 at 9:41
This issue was updated by revision 62f4d1cac63c.
User guide documentation done. Keyword documentation missing.
Original comment by tatu.ka...@eficode.com
on 24 May 2013 at 12:28
This issue was updated by revision dd314b0d94c0.
Keyword documentation done. Issue ready for review.
Original comment by tatu.ka...@eficode.com
on 24 May 2013 at 1:04
This issue was updated by revision 5abb4f63b050.
Enhanced kw docs.
Also found some issues when looking at the code in BuiltIn:
- These keywords belong to the _Control class.
- The If-version should support non-existing variables in message and tags when the condition is false.
Original comment by robotframework@gmail.com
on 27 May 2013 at 9:40
This issue was updated by revision cb2f7832cbfb.
Pass Execution If keyword does not resolve variables anymore if the condition is false. Also added an explicit test for this.
Also moved pass_execution and pass_execution_if methods into class _Control in
BuiltIn.
Original comment by anssi.sy...@eficode.com
on 27 May 2013 at 10:07
This issue was updated by revision 09ae77027cc5.
Enhanced tests of Pass Execution
Original comment by tatu.ka...@eficode.com
on 27 May 2013 at 12:07
This issue was updated by revision 8166e371004d.
Enhanced docs in UG.
Tatu is still adding some more tests, but otherwise this issue is done.
Original comment by robotframework@gmail.com
on 27 May 2013 at 3:40
This issue was updated by revision 07fd3743f505.
Enhanced tests to cover the situation where there are continuable failures in a keyword teardown.
Original comment by tatu.ka...@eficode.com
on 28 May 2013 at 8:20
Original comment by tatu.ka...@eficode.com
on 28 May 2013 at 8:36
Original comment by anssi.sy...@eficode.com
on 31 May 2013 at 12:43
I am doing some checks and dynamically setting the tag "SKIP" in the suite setup. I would like to pass all the test cases in the suite if the SKIP tag is present. I want to add something like the below in the Test Setup:
Pass Execution If (tag == SKIP)
If I add it in the Test Setup it passes only the setup, not the test case. I don't want to have this check in each test case; it would be easier if I could pass the test case from the Test Setup itself.
The documentation says:
"Skips rest of the current test, setup, or teardown with PASS status.
This keyword can be used anywhere in the test data, but the place where used
affects the behavior:"
Original comment by devendra...@gmail.com
on 7 Sep 2014 at 7:51
Original issue reported on code.google.com by
c.martin...@gmail.com
on 4 Dec 2008 at 10:19