kytrinyx opened this issue 4 years ago
I did a search, and it looks like there has been some discussion about this previously:
@yawpitch said in https://github.com/exercism/python/issues/1853#issuecomment-578778610:
> We didn't do that for a good reason: we simply cannot rely on the canonical-data.json files having taken ordering into account in a manner that makes any sense to this sort of progressive TDD approach. Many tests have been added with no discernible concern for that sort of ordering, and I simply don't see us being able to apply such an ordering to that data, especially since that repo is still locked.
My quick take on this is that, while that's true and there are lots of places where the tests may not have been ordered correctly, there are also a significant number of places where they have been deliberately ordered and where that order greatly improves the learning experience.
So there is value in respecting the order, even though it will not always be beneficial, and I can't see a downside to respecting it.
If that is accurate, then the only negative of implementing some solution that respects the order is that (as outlined in #1853 and other places) there's not a particularly clean solution. However, I would argue that an ugly solution is better than not having ordering. So, pending someone willing to do the work, I think this is a change we should make.
Piping in to explain: the technical reason we never did this is that `unittest.TestLoader.sortTestMethodsUsing = None` does not imply line-order sort; it merely removes the current sort behavior entirely (which is alphabetical by default). What you get instead is an undocumented and undefined order, though in practice, in the current source code, that's the implicit order returned by calling `dir(YourTestCaseSubclass)`, which, annoyingly, is alphabetical.
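For illustration, `dir()` returns attribute names in alphabetical order regardless of the order in which they were defined (`ExampleTest` here is a made-up class, not from the track):

```python
import unittest


class ExampleTest(unittest.TestCase):  # hypothetical example class
    def test_zebra(self):
        pass

    def test_apple(self):
        pass


# dir() returns names sorted alphabetically, not in definition order.
print([name for name in dir(ExampleTest) if name.startswith("test")])
# -> ['test_apple', 'test_zebra']
```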
To reliably get line order you need to set the `sortTestMethodsUsing` class attribute to a function that looks up each test method's line number in the source of the original file via runtime introspection and compares based on that. Such a function is non-trivial, involves importing at least the `inspect` module, and would need to be added to the template for every generated test file (see the sketch below). It's also fairly brittle if inheritance is used to cobble together `unittest.TestCase` instances, though I can't think of an example of that in our code base.
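A minimal sketch of what such a comparator could look like, assuming a hypothetical `RaindropsTest` case; this is illustrative only, not the track's actual template code:

```python
import inspect
import unittest


def make_line_order_cmp(test_case_class):
    """Build a cmp-style function that orders test method names by the
    line on which each method is defined in the source file."""
    def line_number(name):
        # inspect.getsourcelines returns (source_lines, starting_line_number)
        return inspect.getsourcelines(getattr(test_case_class, name))[1]

    def compare(name_a, name_b):
        return line_number(name_a) - line_number(name_b)

    return compare


class RaindropsTest(unittest.TestCase):  # hypothetical stand-in
    def test_the_sound_for_3_is_pling(self):
        pass

    def test_the_sound_for_1_is_1(self):
        pass


if __name__ == "__main__":
    loader = unittest.TestLoader()
    # Assign on the instance so the function is not bound as a method;
    # the loader wraps it with functools.cmp_to_key internally.
    loader.sortTestMethodsUsing = make_line_order_cmp(RaindropsTest)
    unittest.main(testLoader=loader)
```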
All of that, and the fact that it just generally seems a bad idea to imply to students that they should expect a fully curated sort order in the real world of Python development, has weighed against this change.
Now, yes, another, simpler way would be to add to the templates some test-name generator logic that introduces a sorting number (so each test method would be generated as `test_0001_some_aspect`, `test_0002_some_other_aspect`, etc.; see the sketch below), but we'd have to ensure zero padding for unknown numbers of tests, and we've already got a problem with overlong test names.
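A rough sketch of what that name generation could look like (the helper name and padding width are assumptions, not anything from the actual generator):

```python
def numbered_test_name(index, description, width=4):
    """Prefix a generated test name with a zero-padded counter so that
    alphabetical sort matches definition order."""
    return f"test_{index:0{width}d}_{description}"


# e.g. during template rendering:
print(numbered_test_name(1, "some_aspect"))        # test_0001_some_aspect
print(numbered_test_name(2, "some_other_aspect"))  # test_0002_some_other_aspect
```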
@yawpitch Thanks for explaining :)
In #1853 you mused about pytest saying:
> Thus if we give a hard requirement on pytest as the only "blessed" test runner then wouldn't we be better off simply re-writing all the templates to generate pytest-style tests anyway, and we'd get the definition order for free in the process?
Is there a (good enough) reason not to use pytest as the "blessed" runner? I believe many other tracks do have a specific test runner that they advocate (the JS family as an example). It sounds like this would simplify the situation and maybe give us a cleaner solution? But I don't know enough about Python to understand the tradeoffs.
> Is there a (good enough) reason not to use pytest as the "blessed" runner?
Reasons not to restrict to a single test runner (I don't know if any of them are "good enough"):

- Requiring `pytest` would add a dependency; currently the test code only requires a supported Python installation, with no other dependencies.
- Some students may not want to use `pytest` at all (I am not in favor of moving in this direction).
- If someone already prefers a runner other than `pytest`, and just wants to practice with that, they are able to do so.

These exercises are supposed to teach the basics, so I think it makes more sense to have just `unittest` usage explained, to avoid the issue of people having to figure out how to install `pytest`. I don't really see the benefit of `pytest` over `unittest` here, other than slightly prettier output. If someone is already familiar with another runner, I would think they'd know how to use it already regardless.
This issue has been automatically marked as abandoned because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I am going to re-open this issue. In light of V3 changes that now depend on PyTest, and the test runner interface for V3 concept exercises having a task number and implementation tests (runner interface specs), it no longer makes sense to argue that we are "only" using `unittest`.
We now have dependencies on PyTest, and PyTest is at the heart of the runner that will be used on the V3 website to run student code -- so we should instruct students to also install the same test runner and configuration on their own machines, so that they can use the same toolset prior to uploading.
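Concretely, that would be something along these lines (the test file name here is illustrative):

```console
python -m pip install pytest
python -m pytest raindrops_test.py
```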
PyTest is very popular (I'd almost call it a default at this point) and is used for most Python projects (outside of the Python language core dev team, which uses `unittest`) -- nose has been in maintenance mode for a while, and no one has stepped forward to continue development. nose2 is also largely in a holding pattern. There is ptr, but it is pretty new. The vast majority of the instruction or tutorials a student might encounter would point them to PyTest, or PyTest + plugins. See this and this for some examples.
Not going to go into ordering atm, but pytest-order and pytest-ordering as plugins to PyTest look interesting as options.
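As a rough illustration of pytest-order (installed via `pip install pytest-order`), tests can be given explicit order markers; `convert` here is a stand-in for the exercise solution, not actual track code:

```python
import pytest

from raindrops import convert  # hypothetical solution module


@pytest.mark.order(2)  # runs second, even though it is defined first
def test_the_sound_for_3_is_pling():
    assert convert(3) == "Pling"


@pytest.mark.order(1)  # runs first
def test_the_sound_for_1_is_1():
    assert convert(1) == "1"
```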
I tried out a couple of exercises on the Python track today, and noticed with the Raindrops exercise in particular that the tests were requiring me to implement the entire algorithm by the second failure.
I'm working in Python 3.8.5.
I run the tests with the following command:
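Presumably this was the plain `unittest` invocation of the test file; the exact command and file name below are an assumption:

```console
python raindrops_test.py
```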
With the unchanged stub file in place, I get an initial failure.
This is a reasonable "first" failure, since it's a single number that should be turned into a string. If I make that pass with a hard-coded `return "8"`, then I get the next failure.

Looking at the tests, the order that they are defined in is quite nice, in terms of pushing the implementation forward step by step.
I tried adding the following declaration after the `import unittest` statement, to see if this would cause the tests to be run in the order that they are defined:
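```python
import unittest

# This merely removes the loader's default alphabetical sort;
# it does not restore definition order (the order becomes undefined).
unittest.TestLoader.sortTestMethodsUsing = None
```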
This did not change the order that the tests are run in.
If I sort the tests alphabetically by name, that appears to correspond to the order that I'm seeing when doing the exercise.
While under normal circumstances I would want tests to be run in random order, in Exercism exercises I think that it would make more sense to run the tests in the order that they are defined.
Is there a declaration we can make in order to accomplish this? If not, then I would propose that we rename the tests by adding a numeric prefix to the test name (e.g. the zero-padded `test_0001_some_aspect` pattern discussed above).