xZise opened 7 years ago
To be honest, I don't really know how your greenlet implementation is going to work, so I can't really say what needs to be changed.
But the main advantage of using pytest is that it provides more information and doesn't stop the other tests. When I was running my tests regarding #29 yesterday, it would only run until the first assertion failed, and even then it wouldn't tell me what went wrong. And maybe there is actually a way to show where the problematic assertion happened.
Okay, I have found a way to actually show where the assertion happened; unfortunately, the Python 2 code cannot be parsed in Python 3, so I need to find a way around that.
Okay, I think I've resolved the issue that it didn't show where the assertion happened in both Python versions. Additionally, I do think that at some point your repository should use a testing suite. At the moment, if you just use plain assertions, they only tell you that they failed but not why (for example, what the separate values of a comparison were).
So I'm wondering if I should separate out the second commit, which adds the threading stuff, because a patch like #29 would benefit from it: it doesn't use threads and thus could be written to be used by pytest. The only downside (I think) is that, if the introduction of greenlets makes it impossible to use pytest, the patch might need to be reverted in favour of a more suitable library.
But I have to say it is cumbersome to run tests when you only get to know that something failed, but not what happened (unless you manually print it out). If you are fine with separating the commits, please tell me which commit this pull request should contain (i.e. the first or the second). There is also the gnome_music test. It seems to me that this could easily be converted, but you can say whether I should include it in one of those commits or in a separate commit (which could then be included in either of the pull requests). One feature of pytest (and other frameworks) is the ability to skip unsuitable tests. For example, I don't have GNOME Music installed, so it should skip that test on my machine; at the moment it would throw an exception and be classified as a failure.
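For illustration, such a skip could be expressed with pytest's skipif marker. This is only a sketch: the check via the gnome-music binary is an assumption, not how the actual gnome_music test detects the application.

```python
import shutil

import pytest

# Hypothetical availability check: assume GNOME Music is usable if the
# gnome-music executable is on PATH (placeholder logic, not from pydbus).
gnome_music_missing = shutil.which("gnome-music") is None

@pytest.mark.skipif(gnome_music_missing,
                    reason="GNOME Music is not installed")
def test_gnome_music():
    # The real test body would talk to GNOME Music over D-Bus;
    # here it is just a stub.
    assert True
```

With this marker, pytest reports the test as "skipped" with the given reason instead of counting the missing dependency as a failure.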
I've never had any problems with checking why a test has failed. There is a traceback, and the line with the assert is included. Of course, bare assert(False)s aren't too readable, but it looks like pytest.raises is usable even without running the tests through the pytest command:
>>> import pytest
>>> with pytest.raises(AttributeError):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.5/site-packages/_pytest/python.py", line 1217, in __exit__
    pytest.fail(self.message)
  File "/usr/lib/python3.5/site-packages/_pytest/runner.py", line 537, in fail
    raise Failed(msg=msg, pytrace=pytrace)
Failed: DID NOT RAISE <class 'AttributeError'>
>>> with pytest.raises(AttributeError):
...     raise AttributeError()
...
>>>
And BTW, the GNOME Music test is not really useful for testing pydbus. I made it when I was porting GNOME Music to pydbus (however, they chose to go straight for GDBus). It isn't 100% automated, it depends on external circumstances (for example, the user has to have some songs in their GNOME Music collection), it's slow, it affects the user, and it's impossible to use on Travis, while I'm almost exclusively using Travis to run the test collection.
It's there, because it somewhat works, but it's not a good idea to run it automatically.
Here is an example of what is shown when an assertion fails. To force it, I used this branch and modified pydbus/tests/identifier.py:
> assert filter_identifier(input) == output
E assert 'abc' == 'abcd'
E - abc
E + abcd
E ? +
pydbus/tests/identifier.py:13: AssertionError
If I don't use pytest I get the following:
$ python -c "import pydbus.tests.identifier as I; I.test_filter_identifier('abc', 'abcd')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "pydbus/tests/identifier.py", line 13, in test_filter_identifier
    assert filter_identifier(input) == output
AssertionError
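Since the test is called above with an input/output pair, its shape is presumably something like the following parametrized sketch. The filter_identifier body here is a stand-in for illustration only; the real implementation lives in pydbus and may behave differently.

```python
import re

import pytest

def filter_identifier(text):
    # Stand-in implementation (assumption): keep only characters that
    # could appear in an identifier. Not the actual pydbus logic.
    return re.sub(r"[^A-Za-z0-9_]", "", text)

@pytest.mark.parametrize("input, output", [
    ("abc", "abc"),
    ("a-b c", "abc"),
])
def test_filter_identifier(input, output):
    # pytest rewrites this assert, so a failure prints both compared
    # values and a character-level diff, as in the output above.
    assert filter_identifier(input) == output
```

Run under pytest, each parameter pair becomes its own test case; the function can also still be called manually with explicit arguments, as in the python -c invocation above.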
Apart from that, it only stops the test function itself after an assertion fails; the other test functions will still be run. The other tests may fail as well, but if they don't depend on each other (and they shouldn't; otherwise they should be in the same test function), this tells you more about the actual issue. If you look at the tests of #29, you'll see that you can basically run each test independently.
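A quick way to see this continue-after-failure behaviour, assuming pytest is installed, is to run pytest on a throwaway file containing one failing and one passing test:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Two independent tests: the first fails, the second passes.
source = textwrap.dedent("""
    def test_fails():
        assert 1 == 2

    def test_still_runs():
        assert 1 == 1
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test_demo.py")
    with open(path, "w") as f:
        f.write(source)
    # Run pytest as a subprocess so the demo is self-contained.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", path],
        capture_output=True, text=True)

# The summary reports both results: one failure does not stop the run.
print(result.stdout)
```

The quiet summary ends with something like "1 failed, 1 passed", showing that the second test function still ran after the first one's assertion failed.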
This rewrites the existing tests to be used with pytest, which produces more helpful output when an assertion fails. It also allows having multiple separate tests in one file, split into functions. It additionally adds some tests that use multiple interfaces in one class, and it checks that the results are actually valid instead of just printing them out.