avocado-framework / avocado

Avocado is a set of tools and libraries to help with automated testing. One can call it a test framework with benefits. Native tests are written in Python and they follow the unittest pattern, but any executable can serve as a test.
https://avocado-framework.github.io/

unable to skip test in class when using method of class #1815

Closed jscotka closed 7 years ago

jscotka commented 7 years ago

I have a problem: there are several variants of how to get RPM packages (Docker, an RPM-based system), and I would like to skip a test depending on which packages are installed. I have code like:

import socket
from avocado import main
from avocado import skipIf
import moduleframework

class SanityCheck1(moduleframework.AvocadoTest):
    """
    :avocado: enable
    """
    def test1(self):
        self.start()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('localhost', self.getConfig()['service']['port']))
        s.sendall('set Test 0 100 4\r\n\n')
        s.close()

    def test2(self):
        self.start()
        self.run("ls / | grep bin")

    @skipIf("gcc" not in self.getActualPackages(), "gcc is not installed")
    def test3skipped(self):
        self.start()
        self.run("gcc -v")

I'm unable to solve this in an elegant way right now.

ldoktor commented 7 years ago

Let me just add that @jscotka's moduleframework.AvocadoTest supports several backends, so self.getActualPackages() depends on setUp(). We briefly talked about this and the only proper way I came up with was:

def setUp(self):
    if self._testMethodName in ("test3skipped",):
        if "gcc" not in self.getActualPackages():
            self.skip("gcc is not installed")

which becomes ugly with multiple test methods and even worse with several requirements per method. For reasons like this I was against disabling self.skip inside tests: in textbook examples it's not really nice practice to skip a test from inside its body, but in real life it's sometimes the cleanest solution.
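
For what it's worth, a minimal sketch of how such per-method checks could at least be collected into a single table instead of a chain of ifs. It assumes the same moduleframework.AvocadoTest base class and getActualPackages() helper from the report above (both belong to @jscotka's framework, not to Avocado), and it still relies on the unittest-internal self._testMethodName:

import moduleframework

# hypothetical table: test method name -> packages it needs
REQUIRED_PACKAGES = {
    "test3skipped": ["gcc"],
}

class SanityCheck1(moduleframework.AvocadoTest):

    def setUp(self):
        # assuming the framework's base setUp() prepares the backend
        super(SanityCheck1, self).setUp()
        installed = self.getActualPackages()
        for package in REQUIRED_PACKAGES.get(self._testMethodName, []):
            if package not in installed:
                self.skip("%s is not installed" % package)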

apahim commented 7 years ago

Hello @jscotka,

Decorators are function wrappers and they receive only the decorated function as an argument. It means that the decorator does not have access to the class instance (self). But still, you can create, for example, a module-level variable with the information you want to test in the skipIf(). Example:

import mylib
import avocado

PACKAGES = mylib.getActualPackages()

class MyTest(avocado.Test):
    @avocado.skipIf("gcc" not in PACKAGES, "gcc is not installed")
    def test(self):
        pass
$ cat mylib.py
def getActualPackages():
    return ['notgcc', 'SomethingElse']
$ avocado run test_decorator_jscotka.py
JOB ID     : 4b051b1f4d364f7c18c8ab7b19631d52a597e361
JOB LOG    : /home/apahim/avocado/job-results/job-2017-02-23T14.11-4b051b1/job.log
 (1/1) test_decorator_jscotka.py:MyTest.test: SKIP
RESULTS    : PASS 0 | ERROR 0 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.00 s
JOB HTML   : /home/apahim/avocado/job-results/job-2017-02-23T14.11-4b051b1/html/results.html
jscotka commented 7 years ago

Hi @apahim, it is not possible to create a similar module-level function in my case, because it depends on the Avocado multiplexer: I cannot dereference the test params outside of the Test instance itself, and the class behaves differently depending on the backend (RPM, Docker instances). I found just one solution in the Avocado source code that seems more or less elegant:

    def test3skipped(self):
        if "gcc" not in self.getActualPackages():
            raise exceptions.TestDecoratorSkip("gcc is not installed")

It is the closest thing to using a decorator. The solution proposed by @ldoktor is very ugly, as he himself mentioned, in the case of more variants and lots of tests in one class.

apahim commented 7 years ago

@jscotka the problem with using Avocado internal APIs is that they are not guaranteed to be stable. What you're proposing is considered a hack by us. Yes, it can indeed currently work to skip the test from within the test, but I would not recommend using the Avocado internal exceptions in your code. For instance, this very exception you're taking advantage of in your example was modified just a couple of weeks ago. If you had been using it before, your test would probably be broken now.

Considering the Avocado architecture, the solution proposed by @ldoktor is the only supported one we have for your use case. The only alternative I see is to use self.fail("gcc is not installed") inside your test. By design (in Avocado), once the test has started, you cannot skip it anymore; so, instead, you should consider the test a FAIL.
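
For illustration, a sketch of that supported alternative inside the class from the original report; getActualPackages(), start() and run() are assumed to come from @jscotka's framework, not from Avocado:

import moduleframework

class SanityCheck1(moduleframework.AvocadoTest):
    """
    :avocado: enable
    """
    def test3(self):
        if "gcc" not in self.getActualPackages():
            # once the test has started, report FAIL instead of SKIP
            self.fail("gcc is not installed")
        self.start()
        self.run("gcc -v")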

ldoktor commented 7 years ago

Well, the question is whether we should reconsider this decision (you know, I was never a big fan of it). I was used to being able to skip a test even after it had started, and this example looks like a good fit. @clebergnu, @adereis, what do you say? I know my solution is cleaner, but it is actually not 100% clean either: it uses unittest internal API to get the current test method, and with more requirements it becomes hard to read.

One more (important) thing for @jscotka, and a bug report for us: skipped tests are not supposed to run the tearDown method. Currently this is only true for tests skipped from setUp; the ones that use the decorators do run tearDown. (Again, it was a design decision that setUp skips should clean up before calling self.skip, and yes, I'd prefer it to be the other way around. Anyway, I think this requires a review.)

adereis commented 7 years ago

On Thu, Feb 23, 2017 at 07:33:41AM -0800, Lukáš Doktor wrote:

Well, the question is whether we should reconsider this decision (you know, I was never a big fan of it). I was used to being able to skip a test even after it had started, and this example looks like a good fit. @clebergnu, @adereis, what do you say? I know my solution is cleaner, but it is actually not 100% clean either: it uses unittest internal API to get the current test method, and with more requirements it becomes hard to read.

A test that has dependencies which can't be figured out prior to running the main test can't be skipped in current Avocado. But it can be aborted with an error.

Skipping a test or not is a test-runner decision. A test can't skip itself by definition. If the test is being run, then it can't be skipped, but it can be aborted (again, by definition).

The proper solution is a mechanism for declaring test dependencies. Something we've always planned, but haven't delivered yet.

So with today's Avocado, if one is writing a test suite where dependencies can't be figured out prior to test run-time, one has to get used to seeing errors reported in the test results. And remember: ERROR != FAIL.

One more (important) thing for @jscotka, and a bug report for us: skipped tests are not supposed to run the tearDown method. Currently this is only true for tests skipped from setUp; the ones that use the decorators do run tearDown. (Again, it was a design decision that setUp skips should clean up before calling self.skip, and yes, I'd prefer it to be the other way around. Anyway, I think this requires a review.)

This is a bug. I don't remember participating in the discussions that led to this, but IMO, skipped tests are not considered run and therefore there should be no tear-down.

I strongly discourage skipping tests at setUp, anyway. The only reason why it's possible to skip a test during setUp is that we don't have a mechanism for declaring test dependencies yet, so we ended up delivering a hackish compromise solution.

I would rather see a PreCheck() method that is run at loading/resolving time, with access to test params from the Varianter. The test runner runs this method for each test (if it exists) and if it fails, the test runner skips the test.
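
Purely to illustrate the idea, a hypothetical sketch of such a hook; no preCheck() mechanism exists in Avocado, so the name, signature and runner behaviour below are all invented:

from avocado import Test

class MyTest(Test):

    def preCheck(self):
        # hypothetical: the runner would call this at loading/resolving time,
        # with access to the Varianter params, and SKIP the test before
        # setUp() if it returns False
        return "gcc" in self.params.get("packages", default=[])

    def test(self):
        pass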

clebergnu commented 7 years ago

A quick overview of relevant points:

  • We have tried hard to enforce the meaning (definition) of SKIP. To skip a test means that the test code is never run.
  • There's a need to ignore the outcome of a test, which has been mistakenly treated as the same thing as skipping a test.
  • @adereis suggested ERRORing the test, but this also carries a specific meaning (see the recently added docs: http://avocado-framework.readthedocs.io/en/46.0/ReferenceGuide.html#test-statuses).

My suggestion is to add an IGNORE status. Its meaning (definition) should be pretty clear: one should not depend on its outcome, yet some (or all) of it was executed. I see the effort to support this type of status as pretty small.

clebergnu commented 7 years ago

BTW, in addition to such a status, test suite writers would be able to check test dependencies and SKIP them outside the context of the test, when such a mechanism is introduced.

adereis commented 7 years ago

On Thu, Feb 23, 2017 at 09:04:08AM -0800, Cleber Rosa wrote:

A quick overview of relevant points:

  • We have tried hard to enforce the meaning (definition) of SKIP. To skip a test means that the test code is never run.
  • There's a need to ignore the outcome of a test, which has been mistakenly treated as the same thing as skipping a test.
  • @adereis suggested ERRORing the test, but this also carries a specific meaning (see the recently added docs: http://avocado-framework.readthedocs.io/en/46.0/ReferenceGuide.html#test-statuses).

(I agree with the ERROR description from the documentation)

IIRC, ERRORs are mapped to FAILs in XUnit, correct? (I see XUnit also has an error attribute somewhere, but that's outside the scope of a single test.)

My suggestion is to add an IGNORE status. Its meaning (definition) should be pretty clear: one should not depend on its outcome, yet some (or all) of it was executed. I see the effort to support this type of status as pretty small.

And I guess we would map IGNOREs to SKIPs in XUnit, which makes sense to me and will solve the problem originally reported.

I also assume a new API would be introduced: Test::ignore(), while Test::skip() would be deprecated in favor of it (this way we can officially get rid of the setUp hack -- the only way to skip a test would be via decorator or in the future, using some sort of pre-check mechanism, like a dependency check).

I think it's a fine solution, but I would call it ABORTED and Test::abort() respectively, because test statuses represent a status, not a decision (PASS, FAIL, ERROR, SKIP, ABORT --> statuses, or results of a test run; IGNORE --> a decision to be made based on a status).

Also, Test::abort() would not call tearDown()... Maybe this can be configured via a parameter, resulting in something like Test::abort(runTearDown=False).

This is a clean solution that doesn't break any semantics or architecture. No need for exceptions in documentation or code and the behavior is intuitive. I like it. :-)


jscotka commented 7 years ago

Hi, the issue is that I need to find the most elegant solution now, by next week. We've discussed with @ldoktor that an Avocado API could solve my issue, but the problem is that it does not exist today. So the cleanest solution for me is:

    def test3skipped(self):
        if "gcc" not in self.getActualPackages():
            raise exceptions.TestDecoratorSkip("gcc is not installed")

or it would be using self.skip inside each test, which is not permitted.

Some notes about the states: using the ERROR state for this is a very bad idea, because an error also means that something bad happened inside, so I would need to inspect it, and I don't want to inspect these skipped test cases. IGNORE: yes, it could be okay to use some similar state instead of SKIP. SKIP is actually the best state for this situation; I can write a wrapper inside my test framework so I'm able to fix this issue in the future by redefining the wrapper, instead of changing every test that uses it.
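
A sketch of the kind of wrapper meant here: one helper inside the test framework that, for now, raises the same internal TestDecoratorSkip used above (unsupported, as warned earlier), so that only this one place has to change once an official API appears:

from avocado.core.exceptions import TestDecoratorSkip  # internal, may change

def skip_unless_installed(test, package):
    # 'test' is the running test instance; getActualPackages() comes from
    # @jscotka's framework, not from Avocado
    if package not in test.getActualPackages():
        raise TestDecoratorSkip("%s is not installed" % package)

Inside a test method this would then just be skip_unless_installed(self, "gcc").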

ldoktor commented 7 years ago

On 23. 2. 2017 at 19:01, Ademar de Souza Reis Jr wrote:

On Thu, Feb 23, 2017 at 09:04:08AM -0800, Cleber Rosa wrote:

A quick overview of relevant points:

  • We have tried hard to enforce the meaning (definition) of SKIP. To skip a test means that the test code is never run.

The problem here is that one class can define only one setUp but many test* methods. In this case each test* has different requirements, which would lead to a series of if-method-name checks, which is IMO a big step back to non-OO languages.

  • There's a need to ignore the outcome of a test, which has been mistakenly treated as the same thing as skipping a test.

  • @adereis suggested ERRORing the test, but this also carries a specific meaning (see the recently added docs: http://avocado-framework.readthedocs.io/en/46.0/ReferenceGuide.html#test-statuses).

(I agree with the ERROR description from the documentation)

IIRC, ERRORs are mapped to FAILs in XUnit, correct? (I see XUnit also has an error attribute somewhere, but that's outside the scope of a single test.)

XUnit supports both ERROR and FAIL. The only missing mapping is WARN, which we map to PASS in XUnit.

My suggestion is to add an IGNORE status. Its meaning (definition) should be pretty clear: one should not depend on its outcome, yet some (or all) of it was executed. I see the effort to support this type of status as pretty small.

And I guess we would map IGNOREs to SKIPs in XUnit, which makes sense to me and will solve the problem originally reported.

This is just playing with words, but if it makes you happy I'll support it. I don't like the name, though; how about CANCEL? Anyway, it should map to SKIP, as most frameworks support that, while our status would be finer-grained.


ldoktor commented 7 years ago

One more (important) thing for @jscotka, and a bug report for us: skipped tests are not supposed to run the tearDown method. Currently this is only true for tests skipped from setUp; the ones that use the decorators do run tearDown. (Again, it was a design decision that setUp skips should clean up before calling self.skip, and yes, I'd prefer it to be the other way around. Anyway, I think this requires a review.)

This is a bug. I don't remember participating in the discussions that led to this, but IMO, skipped tests are not considered run and therefore there should be no tear-down.

Yep, that's why I pointed it out (I noticed it while looking at this issue: https://trello.com/c/gsdBAmMY/937-bug-avoid-running-teardown-when-test-is-skipped-by-decorator).

My point in re-evaluating this is that it's way simpler to write a tearDown which checks whether a resource was created than to write setUps which clean up before raising self.skip, and it is really not possible to safely clean up from within the skipIf decorator.
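
A small sketch of that "tearDown which checks" pattern; self.skip() in setUp is the mechanism discussed in this thread, while the params key and the placeholder resource are made up for illustration:

import tempfile

from avocado import Test

class ResourceTest(Test):

    def setUp(self):
        self.resource = None
        if not self.params.get("backend_available", default=True):
            self.skip("backend is not available")
        # stands in for an expensive resource (VM, container, ...)
        self.resource = tempfile.NamedTemporaryFile()

    def test(self):
        pass

    def tearDown(self):
        # only clean up what setUp actually managed to create
        if getattr(self, "resource", None) is not None:
            self.resource.close()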

I understand why we defined it this way: we expected to deliver another dependency mechanism, which would happen before allocating resources, but that hasn't happened and I don't see it happening any time soon. I hope we add the CANCEL (or IGNORE) status, which will execute the tearDown (would it?), and people will have the mechanisms they are used to.

Btw, I'm not sure we'll ever write a dep solver which would be able to query this kind of test (which executes on a VM, in Docker, or on localhost depending on params) before setUp, do you?

Lukáš

clebergnu commented 7 years ago

2017-02-24 1:56 GMT-05:00 jscotka notifications@github.com:

Hi, the issue is that I need to find the most elegant solution now, by next week. We've discussed with @ldoktor that an Avocado API could solve my issue, but the problem is that it does not exist today. So the cleanest solution for me is:

def test3skipped(self):
    if "gcc" not in self.getActualPackages():
        raise exceptions.TestDecoratorSkip("gcc is not installed")

or it would be using self.skip inside each test, which is not permitted.

Some notes about the states: using the ERROR state for this is a very bad idea, because an error also means that something bad happened inside, so I would need to inspect it, and I don't want to inspect these skipped test cases. IGNORE: yes, it could be okay to use some similar state instead of SKIP. SKIP is actually the best state for this situation; I can write a wrapper inside my test framework so I'm able to fix this issue in the future by redefining the wrapper, instead of changing every test that uses it.

Writing the wrapper seems like a good approach. As always, we'll try to respond to user requests such as yours with increased priority.


adereis commented 7 years ago

On Thu, Feb 23, 2017 at 10:56:42PM -0800, jscotka wrote:

Hi, the issue is that I need to find the most elegant solution now, by next week. We've discussed with @ldoktor that an Avocado API could solve my issue, but the problem is that it does not exist today. So the cleanest solution for me is:

    def test3skipped(self):
        if "gcc" not in self.getActualPackages():
            raise exceptions.TestDecoratorSkip("gcc is not installed")

or it would be using self.skip inside each test, which is not permitted.

These solutions are both stamped with NACKs, AFAICS. Pushing for this will lead nowhere.

Some notes about the states: using the ERROR state for this is a very bad idea, because an error also means that something bad happened inside, so I would need to inspect it, and I don't want to inspect these skipped test cases. IGNORE: yes, it could be okay to use some similar state instead of SKIP. SKIP is actually the best state for this situation; I can write a wrapper inside my test framework so I'm able to fix this issue in the future by redefining the wrapper, instead of changing every test that uses it.

I think IGNORE is the wrong name (as stated before, I would call it ABORT), but it is the only acceptable solution IMO. It would be mapped to SKIPPED in XUnit.


adereis commented 7 years ago

On Fri, Feb 24, 2017 at 01:27:01AM -0800, Lukáš Doktor wrote:


IIRC, ERRORs are mapped to FAILs in XUnit, correct? (I see XUnit also has an error attribute somewhere, but that's outside the scope of a single test.)

XUnit supports both ERROR and FAIL. The only missing mapping is WARN, which we map to PASS in XUnit.

Hmm... XUnit is confusing, it's hard to find an authoritative reference for the XML.

So PASS, FAIL, ERROR and SKIP are the valid test statuses from XUnit?

My suggestion is to add an IGNORE status. Its meaning (definition) should be pretty clear: one should not depend on its outcome, yet some (or all) of it was executed. I see the effort to support this type of status as pretty small.

And I guess we would map IGNOREs to SKIPs in XUnit, which makes sense to me and will solve the problem originally reported.

This is just playing with words, but if it makes you happy I'll support it. I don't like the name, though; how about CANCEL? Anyway, it should map to SKIP, as most frameworks support that, while our status would be finer-grained.

What's the reason for you not to like ABORT? Please don't tell me it's just taste. :-)

My point is simple: the status should represent what happened with the test. One doesn't "cancel" its own execution, one aborts it.

  • The test being run decided to abort its own execution
  • The test being run decided to cancel its own execution
  • The test being run decided to ignore its own execution (!)
  • The test being run decided to skip its own execution (!)

also:

  • The test runner decided to abort the execution of a test
  • The test runner decided to cancel the execution of a test
  • The test runner decided to ignore the execution of a test (!)
  • The test runner decided to skip the execution of a test

Same thing for:

  • self.abort(): abort its own execution
  • self.cancel(): cancel its own execution
  • self.skip(): skip its own execution (!)
  • self.ignore(): ignore its own execution (!)

Mapping ABORT to SKIP in Xunit is a bit of a stretch, but I would be OK with it as a compromise solution. It certainly is much much cleaner than using skip in setUp() or elsewhere.


apahim commented 7 years ago

On Fri, Feb 24, 2017 at 2:28 PM, Ademar de Souza Reis Jr < notifications@github.com> wrote:


Hmm... XUnit is confusing, it's hard to find an authoritative reference for the XML.

We use junit-4.xsd as the official template. We actually validate our xunit output against it in our selftests: https://github.com/avocado-framework/avocado/blob/master/selftests/unit/junit-4.xsd

So PASS, FAIL, ERROR and SKIP are the valid test statuses from XUnit?

Yep, those 4. In fact the PASS status is omitted. We have errors, failures, skipped and (total-)tests.


What's the reason for you not to like ABORT? Please don't tell me it's just taste. :-)

My point is simple: the status should represent what happened with the test. One doesn't "cancel" its own execution, one aborts it.

  • The test being run decided to abort its own execution
  • The test being run decided to cancel its own execution
  • The test being run decided to ignore its own execution (!)
  • The test being run decided to skip its own execution (!)

also:

  • The test runner decided to abort the execution of a test
  • The test runner decided to cancel the execution of a test
  • The test runner decided to ignore the execution of a test (!)
  • The test runner decided to skip the execution of a test

Same thing for:

  • self.abort(): abort its own execution
  • self.cancel(): cancel its own execution
  • self.skip(): skip its own execution (!)
  • self.ignore(): ignore its own execution (!)

Mapping ABORT to SKIP in Xunit is a bit of a stretch, but I would be OK with it as a compromise solution. It certainly is much much cleaner than using skip in setUp() or elsewhere.

It's hard for me to agree that an ABORT would be mapped to a SKIP. In Avocado we have the concept of 'good' and 'bad' statuses. A SKIP is considered a good status, which does not count towards making the Avocado execution exit with exit_codes.AVOCADO_TESTS_FAIL as the final status. Aborting the test from inside the test should be reflected in a 'bad' test status. Mapping it to SKIP means it will internally be considered a 'good' status and will make the Avocado execution finish with 0 (if nothing else bad happens). But as you said, it's a compromise solution.


adereis commented 7 years ago

On Fri, Feb 24, 2017 at 01:37:27AM -0800, Lukáš Doktor wrote:

One more (important) thing for @jscotka, and a bug report for us: skipped tests are not supposed to run the tearDown method. Currently this is only true for tests skipped from setUp; the ones that use the decorators do run tearDown. (Again, it was a design decision that setUp skips should clean up before calling self.skip, and yes, I'd prefer it to be the other way around. Anyway, I think this requires a review.)

This is a bug. I don't remember participating in the discussions that led to this, but IMO, skipped tests are not considered run and therefore there should be no tear-down.

Yep, that's why I pointed it out (I noticed it while looking at this issue: https://trello.com/c/gsdBAmMY/937-bug-avoid-running-teardown-when-test-is-skipped-by-decorator).

My point in re-evaluating this is that it's way simpler to write a tearDown which checks whether a resource was created than to write setUps which clean up before raising self.skip, and it is really not possible to safely clean up from within the skipIf decorator.

This is exactly the kind of problem that we can avoid in the first place by not allowing broken concepts like a "test skipping itself".

I don't know if this is clear to you, but notice the difference in the way humans think about these things: when we introduced skip, we ended up with bugs (like this one) and maybe others because of the broken semantics. The reason is simple: that's how the human brain works. Skip has clear semantics, which won't trigger thoughts of cleanup or continued execution.

Ditto for abort(), which also has clear semantics. The first thing I thought of in my previous email when I proposed abort was "how do we clean up the status of a test". Again, because abort has clear semantics and forces our brains to think of the consequences.

I understand why we defined it this way: we expected to deliver another dependency mechanism, which would happen before allocating resources, but that hasn't happened and I don't see it happening any time soon. I hope we add the CANCEL (or IGNORE) status, which will execute the tearDown (would it?), and people will have the mechanisms they are used to.

See my previous message.

Btw, I'm not sure we'll ever write a dep solver which would be able to query this kind of test (which executes on a VM, in Docker, or on localhost depending on params) before setUp, do you?

The way I imagine it is by implementing a matching correlation between what is provided by the test environment and what is required by the test (we discussed this at the inception of the project, years ago... you were not there, so the message is probably lost by now).

The test would declare its requirements using some sort of tags, while the test environment would provide a corresponding set. When loading tests for a particular environment, only the tests that have matching tags would be run.

Some of them would be standard tags with definition from Avocado, others would be set by the user.

A trivial example: (I'm ignoring work to be done in defining namespaces and syntax sugar)

Test Requirements: kernel-4.1+, physical_hw_environment, nvidia_gpu_configured_with_4_vgpus, my_scripts_installed

Test Environment 1: kernel-4.1 container_environment intel_gpu ...

Test Environment 2: kernel-4.2 physical_hw_environment intel_gpu nvidia_gpu_configured_with_4_vgpus, my_scripts_installed ...

Test Environment 3: my_scripts_installed intel_gpu ...

If one tries to run the test in all these environments, it'll be skipped in 1 and 3, because only Environment 2 provides matching tags for the requirements.

There's a lot of potential for syntax sugar and namespaces; the tags could actually be Python functions called on both sides, we could also introduce decorators, etc.
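
Just to make the matching rule concrete, a toy sketch of the idea (pure illustration; no such mechanism exists in Avocado, and the kernel version comparison is left out):

def environment_satisfies(required, provided):
    """A test is runnable only if every tag it requires is provided."""
    return set(required) <= set(provided)

REQUIRED = {"physical_hw_environment", "nvidia_gpu_configured_with_4_vgpus",
            "my_scripts_installed"}
ENVIRONMENTS = {
    "env1": {"container_environment", "intel_gpu"},
    "env2": {"physical_hw_environment", "intel_gpu",
             "nvidia_gpu_configured_with_4_vgpus", "my_scripts_installed"},
    "env3": {"my_scripts_installed", "intel_gpu"},
}

for name, tags in sorted(ENVIRONMENTS.items()):
    print(name, "RUN" if environment_satisfies(REQUIRED, tags) else "SKIP")
# -> env1 SKIP, env2 RUN, env3 SKIP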


adereis commented 7 years ago

On Fri, Feb 24, 2017 at 05:59:16AM -0800, Amador Pahim wrote:


It's hard for me to agree that an ABORT would be mapped to a SKIP. In Avocado we have the concept of 'good' and 'bad' statuses. A SKIP is considered a good status, which does not count towards making the Avocado execution exit with exit_codes.AVOCADO_TESTS_FAIL as the final status. Aborting the test from inside the test should be reflected in a 'bad' test status. Mapping it to SKIP means it will internally be considered a 'good' status and will make the Avocado execution finish with 0 (if nothing else bad happens). But as you said, it's a compromise solution.

Yes, it bothers me as well. Which is a good sign: having proper names and semantics forces us to think about the consequences.

Maybe CANCEL/CANCELLED, as suggested by Lukáš, will sound better in this case? When I think of "canceling" something, I think of reverting the state to what it was before... it may be a better name indeed.


ldoktor commented 7 years ago

My point is simple: the status should represent what happened with the test. One doesn't "cancel" its own execution, one aborts it.

  • The test being run decided to abort its own execution
  • The test being run decided to cancel its own execution
  • The test being run decided to ignore its own execution (!)
  • The test being run decided to skip its own execution (!)

also:

  • The test runner decided to abort the execution of a test
  • The test runner decided to cancel the execution of a test
  • The test runner decided to ignore the execution of a test (!)
  • The test runner decided to skip the execution of a test

Same thing for:

  • self.abort(): abort its own execution
  • self.cancel(): cancel its own execution
  • self.skip(): skip its own execution (!)
  • self.ignore(): ignore its own execution (!)

This is exactly what I did, but my sentence was: "I am a test; do I want to abort/cancel/ignore/skip myself?"

I'm not a native speaker, so my feeling is probably different. Anyway, I know I never want to ignore myself. I don't know whether I can abort myself, but I feel like other people can abort me. I think I should be able to cancel myself when something goes wrong, or eventually if I just don't want to run anymore. Out of those I like skip the most, because I do want to skip myself if I don't feel I'm worth "living".

Anyway, I'm influenced by other frameworks more than by the language itself, so my perception is different. If it were up to me, I'd create a DEP_SKIP status and allow SKIP from everywhere. I know this doesn't work for you, so I'd go with SKIP for deps and CANCEL or ABORT to skip with tearDown (mapped to SKIP/PASS).

ldoktor commented 7 years ago

It's hard for me to agree that an ABORT would be mapped to a SKIP. In Avocado we have the concept of 'good' and 'bad' statuses. A SKIP is considered a good status, which does not count towards making the Avocado execution exit with exit_codes.AVOCADO_TESTS_FAIL as the final status. Aborting the test from inside the test should be reflected in a 'bad' test status. Mapping it to SKIP means it will internally be considered a 'good' status and will make the Avocado execution finish with 0 (if nothing else bad happens). But as you said, it's a compromise solution.

Take a look at the usage: it means that the test does not apply to this environment, so you abort it. You can always decide that it does apply but can't be executed, and use ERROR for that.

ldoktor commented 7 years ago

A trivial example: (I'm ignoring work to be done in defining namespaces and syntax sugar)

Test Requirements: kernel-4.1+, physical_hw_environment, nvidia_gpu_configured_with_4_vgpus, my_scripts_installed

Test Environment 1: kernel-4.1 container_environment intel_gpu ...

Test Environment 2: kernel-4.2 physical_hw_environment intel_gpu nvidia_gpu_configured_with_4_vgpus, my_scripts_installed ...

Test Environment 3: my_scripts_installed intel_gpu ...

If one tries to run the test in all these environments, it'll be skipped in 1 and 3, because only Environment 2 provides matching tags for the requirements.

Yes, I can imagine this. But look again at this use case:

  1. One class defines several methods where each method has different dependencies, so the dep-specification needs to be per-test-method.
  2. The environment heavily depends on the params, and to examine it you need to set up a VM/container or just examine your localhost. Other users might need to load some kernel modules, download things from remote servers, build some tools... This seems nearly impossible to me to cover via requirements and environment matching.

I think this will be useful in many cases. But for tests like this, or other kinds of advanced tests, the option to chicken out and say you don't really want to be executed will look better.

Nonetheless, I'd like to stop feeding the flamewar, so let me summarize what we can probably agree on: at this point we don't plan to write a dep solver which would fit this situation. Cleber suggested a new status, and I think we all agree that it would make sense, so let's discuss the name of such a status so we can proceed.

Jan, I can't recommend using the internal exception, but the truth is it'll probably be possible to replace it with sed once we introduce self.cancel or similar. Don't say you weren't warned if anything goes wrong (like when we stop executing the tearDown). If I were to do it the quick and dirty way, I'd define a self.cancel which would raise the internal exception (and later perhaps even call self.tearDown to clean up), and eventually, when we introduce the real API, I'd just remove it and the test code would stay unchanged (unless we go with self.abort). Anyway, it's dirty and it might lead to unpredictable failures.
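
A sketch of that quick-and-dirty shim, reusing the internal TestDecoratorSkip exception imported elsewhere in this thread (again: internal, unsupported, may break):

from avocado import Test
from avocado.core.exceptions import TestDecoratorSkip  # internal, may change

class FrameworkTest(Test):
    # stand-in for the framework's base class (e.g. moduleframework.AvocadoTest)

    def cancel(self, message=None):
        # temporary shim: optionally clean up first (e.g. call self.tearDown()),
        # then bail out; once Avocado gains an official cancel/abort API this
        # method is simply deleted and the self.cancel(...) call sites stay
        raise TestDecoratorSkip(message)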

apahim commented 7 years ago

It's hard for me to agree that an ABORT would be mapped to a SKIP. In Avocado we have the concept of 'good' and 'bad' statuses. A SKIP is considered a good status, which does not count towards making the Avocado execution exit with exit_codes.AVOCADO_TESTS_FAIL as the final status. Aborting the test from inside the test should be reflected in a 'bad' test status. Mapping it to SKIP means it will internally be considered a 'good' status and will make the Avocado execution finish with 0 (if nothing else bad happens). But as you said, it's a compromise solution.

Yes, it bothers me as well. Which is a good sign: having proper names and semantics forces us to think about the consequences.

Maybe CANCEL/CANCELLED, as suggested by Lukáš, will sound better in this case? When I think of "canceling" something, I think of reverting the state to what it was before... it may be a better name indeed.

Yes, I agree that CANCEL has the meaning of reverting something. It is way easier for me to think of CANCEL as a 'good' status.


clebergnu commented 7 years ago

CANCEL also sounds fine to me. Looks like an agreement.

ldoktor commented 7 years ago

Add support to "CANCEL" test execution

apahim commented 7 years ago

Solved by PR #1857.
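
With that in place, the original use case can presumably be written as below; self.cancel() is the call used later in this thread by examples/tests/cancelonsetup.py, and getActualPackages(), start() and run() are still assumed to come from @jscotka's framework:

import moduleframework

class SanityCheck1(moduleframework.AvocadoTest):
    """
    :avocado: enable
    """
    def test3(self):
        if "gcc" not in self.getActualPackages():
            self.cancel("gcc is not installed")  # ends as CANCEL, not SKIP or FAIL
        self.start()
        self.run("gcc -v")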

lmr commented 7 years ago

@apahim, one tidbit of information: decorators do have access to the object instance, it is passed as the first element of args:

def my_decorator(method):
    def wrapper(*args, **kwargs):
        args[0].do_the_thing()
        method(*args, **kwargs)
    return wrapper

class Example(object):

    @my_decorator
    def do_that_other_thing(self):
        print("Ho, let's go")

    def do_the_thing(self):
        print('Hey')

Example().do_that_other_thing()
lmr commented 7 years ago

OK, I should've been more precise. The arguments of the function being decorated are passed on to the decorator's wrapper, and for regular (instance) methods of classes the first one is usually a reference to the object, self.

apahim commented 7 years ago

@lmr yep, actually I expressed myself badly on that matter. The problem is that in the class body there's no self, so you cannot use self in the arguments of the decorator call, like in:

...
class MyTest(avocado.Test):

    def setUp(self):
        self.myvar = True

    @avocado.skipIf(self.myvar)
    def test(self):
        pass
...

The code above is broken.

But as you correctly said, the decorator itself can access the class instance at runtime, via the first argument of the function being wrapped. I even considered creating a solution for the use case above, like having a decorator that also accepts a callable as its condition, so we could pass a lambda that receives self and checks the attribute on it. Something like:

from avocado import Test
from avocado.core.exceptions import TestDecoratorSkip

def skipIf(cond, message=None):
    def decorator(function):
        def wrapper(self, *args, **kwargs):
            if callable(cond):
                if cond(self):
                    raise TestDecoratorSkip(message)
            elif cond:
                raise TestDecoratorSkip(message)
            # run the wrapped test method when the condition does not apply
            return function(self, *args, **kwargs)
        return wrapper
    return decorator

class MyTest(Test):

    def setUp(self):
        self.is_cond_ok = True

    @skipIf(lambda x: x.is_cond_ok, 'Skipping')
    def test1(self):
        pass

    @skipIf(False,  'Skipping')
    def test2(self):
        pass

Code above would result in:

~$ avocado run ~/avocado/tests/test_decorator.py
JOB ID     : ef32b1b89ccbbf310889486aba08b305b0d2e1ff
JOB LOG    : /home/apahim/avocado/job-results/job-2017-03-13T12.04-ef32b1b/job.log
 (1/2) /home/apahim/avocado/tests/test_decorator.py:MyTest.test1: SKIP
 (2/2) /home/apahim/avocado/tests/test_decorator.py:MyTest.test2: PASS (0.02 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0 | CANCEL 0
TESTS TIME : 0.03 s
JOB HTML   : /home/apahim/avocado/job-results/job-2017-03-13T12.04-ef32b1b/html/results.html

But I thought it would be an ugly solution and didn't even propose it.

brianjmurrell commented 5 years ago

Just to clarify though: is setUp() run before any @skip decorators are processed? At least that seems to be my experience with 52.1.

ldoktor commented 5 years ago

This card is about the @skip decorator and not the self.skip method. When the @skip decorator is used, the test class is not executed; the only code from the test that runs is the module's main body and the test's __init__ (to get the instance). Once the instance is obtained and the skip decorator is detected, the execution is interrupted without executing tearDown.

You can try this:

diff --git a/examples/tests/cancelonsetup.py b/examples/tests/cancelonsetup.py
index 5db6d7df..fc7443b0 100644
--- a/examples/tests/cancelonsetup.py
+++ b/examples/tests/cancelonsetup.py
@@ -3,6 +3,7 @@
 from avocado import Test, skip
 from avocado import main

+print("import")

 class CancelOnSetupTest(Test):

@@ -10,8 +11,12 @@ class CancelOnSetupTest(Test):
     Example test that cancels the current test, on the setUp phase.
     """

+    def __init__(self, *args, **kwargs):
+        super(CancelOnSetupTest, self).__init__(*args, **kwargs)
+        self.log.error("init")
+
     def setUp(self):
-        self.log.error("asdf")
+        self.log.error("setUp")
         self.cancel('This should end with CANCEL.')

     @skip("Skipped")
@@ -19,7 +24,10 @@ class CancelOnSetupTest(Test):
         """
         This won't get to be executed, given that setUp calls .cancel().
         """
-        pass
+        self.log.error("test")
+
+    def tearDown(self):
+        self.log.error("TearDown")

 if __name__ == "__main__":

From the output you can see:

$ avocado --show all run examples/tests/cancelonsetup.py 
stevedore.extension: found extension EntryPoint.parse('zip_archive = avocado.plugins.archive:ArchiveCLI')
stevedore.extension: found extension EntryPoint.parse('tap = avocado.plugins.tap:TAP')
stevedore.extension: found extension EntryPoint.parse('journal = avocado.plugins.journal:Journal')
stevedore.extension: found extension EntryPoint.parse('json = avocado.plugins.jsonresult:JSONCLI')
stevedore.extension: found extension EntryPoint.parse('wrapper = avocado.plugins.wrapper:Wrapper')
stevedore.extension: found extension EntryPoint.parse('gdb = avocado.plugins.gdb:GDB')
stevedore.extension: found extension EntryPoint.parse('replay = avocado.plugins.replay:Replay')
stevedore.extension: found extension EntryPoint.parse('xunit = avocado.plugins.xunit:XUnitCLI')
stevedore.extension: found extension EntryPoint.parse('envkeep = avocado.plugins.envkeep:EnvKeep')
stevedore.extension: found extension EntryPoint.parse('remote = avocado_runner_remote:RemoteCLI')
stevedore.extension: found extension EntryPoint.parse('robot = avocado_robot:RobotCLI')
stevedore.extension: found extension EntryPoint.parse('docker = avocado_runner_docker:DockerCLI')
stevedore.extension: found extension EntryPoint.parse('resultsdb = avocado_resultsdb:ResultsdbCLI')
stevedore.extension: found extension EntryPoint.parse('yaml_to_mux = avocado_varianter_yaml_to_mux:YamlToMuxCLI')
stevedore.extension: found extension EntryPoint.parse('html = avocado_result_html:HTML')
stevedore.extension: found extension EntryPoint.parse('vm = avocado_runner_vm:VMCLI')
stevedore.extension: found extension EntryPoint.parse('exec-path = avocado.plugins.exec_path:ExecPath')
stevedore.extension: found extension EntryPoint.parse('run = avocado.plugins.run:Run')
stevedore.extension: found extension EntryPoint.parse('sysinfo = avocado.plugins.sysinfo:SysInfo')
stevedore.extension: found extension EntryPoint.parse('list = avocado.plugins.list:List')
stevedore.extension: found extension EntryPoint.parse('multiplex = avocado.plugins.multiplex:Multiplex')
stevedore.extension: found extension EntryPoint.parse('plugins = avocado.plugins.plugins:Plugins')
stevedore.extension: found extension EntryPoint.parse('diff = avocado.plugins.diff:Diff')
stevedore.extension: found extension EntryPoint.parse('variants = avocado.plugins.variants:Variants')
stevedore.extension: found extension EntryPoint.parse('config = avocado.plugins.config:Config')
stevedore.extension: found extension EntryPoint.parse('distro = avocado.plugins.distro:Distro')
stevedore.extension: found extension EntryPoint.parse('yaml_to_mux = avocado_varianter_yaml_to_mux:YamlToMux')
stevedore.extension: found extension EntryPoint.parse('journal = avocado.plugins.journal:JournalResult')
stevedore.extension: found extension EntryPoint.parse('tap = avocado.plugins.tap:TAPResult')
stevedore.extension: found extension EntryPoint.parse('human = avocado.plugins.human:Human')
stevedore.extension: found extension EntryPoint.parse('resultsdb = avocado_resultsdb:ResultsdbResult')
stevedore.extension: found extension EntryPoint.parse('teststmpdir = avocado.plugins.teststmpdir:TestsTmpDir')
stevedore.extension: found extension EntryPoint.parse('jobscripts = avocado.plugins.jobscripts:JobScripts')
stevedore.extension: found extension EntryPoint.parse('human = avocado.plugins.human:HumanJob')
JOB ID     : 4add5c05fff53846ab9ffb0a664fb39d4493067e
JOB LOG    : /home/medic/avocado/job-results/job-2018-11-12T13.36-4add5c0/job.log
stevedore.extension: found extension EntryPoint.parse('yaml_to_mux = avocado_varianter_yaml_to_mux:YamlToMux')
avocado.sysinfo: File /etc/avocado/sysinfo/commands does not exist.
avocado.sysinfo: File /etc/avocado/sysinfo/files does not exist.
avocado.sysinfo: File /etc/avocado/sysinfo/profilers does not exist.
avocado.sysinfo: System log file not found (looked for ['/var/log/messages', '/var/log/syslog', '/var/log/system.log'])
avocado.test: Command line: /home/medic/.local/bin/avocado --show all run examples/tests/cancelonsetup.py
avocado.test: 
avocado.test: Avocado version: 52.0
avocado.test: 
avocado.test: Config files read (in order):
avocado.test: /home/medic/Work/Projekty/avocado/avocado/etc/avocado/avocado.conf
avocado.test: /home/medic/Work/Projekty/avocado/avocado/etc/avocado/conf.d/resultsdb.conf
avocado.test: /home/medic/Work/Projekty/avocado/avocado/etc/avocado/conf.d/jobscripts.conf
avocado.test: /home/medic/Work/Projekty/avocado/avocado/etc/avocado/conf.d/gdb.conf
avocado.test: /home/medic/.config/avocado/avocado.conf
avocado.test: 
avocado.test: Avocado config:
avocado.test: Section.Key                              Value
avocado.test: datadir.paths.base_dir                   /var/lib/avocado
avocado.test: datadir.paths.test_dir                   /usr/share/avocado/tests
avocado.test: datadir.paths.data_dir                   /var/lib/avocado/data
avocado.test: datadir.paths.logs_dir                   ~/avocado/job-results
avocado.test: sysinfo.collect.enabled                  True
avocado.test: sysinfo.collect.commands_timeout         -1
avocado.test: sysinfo.collect.installed_packages       False
avocado.test: sysinfo.collect.profiler                 False
avocado.test: sysinfo.collect.locale                   C
avocado.test: sysinfo.collectibles.commands            /etc/avocado/sysinfo/commands
avocado.test: sysinfo.collectibles.files               /etc/avocado/sysinfo/files
avocado.test: sysinfo.collectibles.profilers           /etc/avocado/sysinfo/profilers
avocado.test: runner.output.colored                    True
avocado.test: runner.output.utf8                       
avocado.test: remoter.behavior.reject_unknown_hosts    False
avocado.test: remoter.behavior.disable_known_hosts     False
avocado.test: job.output.loglevel                      debug
avocado.test: restclient.connection.hostname           localhost
avocado.test: restclient.connection.port               9405
avocado.test: restclient.connection.username           
avocado.test: restclient.connection.password           
avocado.test: plugins.disable                          []
avocado.test: plugins.skip_broken_plugin_notification  []
avocado.test: plugins.loaders                          ['file', '@DEFAULT']
avocado.test: plugins.jobscripts.pre                   /etc/avocado/scripts/job/pre.d/
avocado.test: plugins.jobscripts.post                  /etc/avocado/scripts/job/post.d/
avocado.test: plugins.jobscripts.warn_non_existing_dir False
avocado.test: plugins.jobscripts.warn_non_zero_status  True
avocado.test: gdb.paths.gdb                            /usr/bin/gdb
avocado.test: gdb.paths.gdbserver                      /usr/bin/gdbserver
avocado.test: 
avocado.test: Avocado Data Directories:
avocado.test: 
avocado.test: base     /home/medic/avocado
avocado.test: tests    /home/medic/Work/Projekty/avocado/avocado/examples/tests
avocado.test: data     /home/medic/avocado/data
avocado.test: logs     /home/medic/avocado/job-results/job-2018-11-12T13.36-4add5c0
avocado.test: 
avocado.test: No variants available, using defaults only
avocado.test: 
avocado.test: Variant :    /
avocado.test: Temporary dir: /var/tmp/avocado_xAPi_g
avocado.test: 
avocado.test: Job ID: 4add5c05fff53846ab9ffb0a664fb39d4493067e
avocado.test: 
avocado.test: import
paramiko: import

here the test module was imported

avocado.sysinfo: File /etc/avocado/sysinfo/commands does not exist.
avocado.sysinfo: File /etc/avocado/sysinfo/files does not exist.
avocado.sysinfo: File /etc/avocado/sysinfo/profilers does not exist.
avocado.sysinfo: System log file not found (looked for ['/var/log/messages', '/var/log/syslog', '/var/log/system.log'])
avocado.test: PARAMS (key=timeout, path=*, default=None) => None
avocado.test: START 1-examples/tests/cancelonsetup.py:CancelOnSetupTest.test_wont_be_executed
avocado.test: init

here is the test __init__

 (1/1) examples/tests/cancelonsetup.py:CancelOnSetupTest.test_wont_be_executed:  avocado.test: 
avocado.test: Reproduced traceback from: /home/medic/Work/Projekty/avocado/avocado/avocado/core/test.py:590
avocado.test: Traceback (most recent call last):
avocado.test:   File "/home/medic/Work/Projekty/avocado/avocado/avocado/core/decorators.py", line 66, in wrapper
avocado.test:     raise core_exceptions.TestSkipError(message)
avocado.test: TestSkipError: Skipped
avocado.test: 
avocado.test: SKIP 1-examples/tests/cancelonsetup.py:CancelOnSetupTest.test_wont_be_executed -> TestSkipError: Skipped
avocado.test: 
SKIP
avocado.test: Test results available in /home/medic/avocado/job-results/job-2018-11-12T13.36-4add5c0
RESULTS    : PASS 0 | ERROR 0 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 0.16 s
stevedore.extension: found extension EntryPoint.parse('zip_archive = avocado.plugins.archive:Archive')
stevedore.extension: found extension EntryPoint.parse('json = avocado.plugins.jsonresult:JSONResult')
stevedore.extension: found extension EntryPoint.parse('xunit = avocado.plugins.xunit:XUnitResult')
stevedore.extension: found extension EntryPoint.parse('html = avocado_result_html:HTMLResult')
JOB HTML   : /home/medic/avocado/job-results/job-2018-11-12T13.36-4add5c0/results.html

And here you can see that no setUp, tearDown or test code was executed. (Using 0322eaf73fd1412fa5ee20af8a6f05a4388203c5.)