lmaurits / BEASTling

A linguistics-focussed command line tool for generating BEAST XML files.
BSD 2-Clause "Simplified" License

Refactor beastrun tests? #238

Closed by lmaurits 5 years ago

lmaurits commented 5 years ago

Right now the beastrun tests are implemented as one massive test iterating over configurations. For each configuration, the test updates (or used to, before the pytest migration) some name or description attribute of itself in an attempt to let CI services like Travis report where a failure occurred. For whatever reason, this never worked (Travis always reported a different config failing than the actual problem). I often made the obvious local hack of having it print out its config before running the test, but somehow that never pointed me at the right place either.

Can we somehow refactor that test so that we always know precisely which configuration failed, and so that it's easy to rerun exactly that test, and only that test, to confirm a fix rather than waiting for the whole suite to run? Not being able to do these things is a huge impediment to reliable development work.

Can we make each logical test inside that one real test into a real test in its own right without having to write a lot of repetitive boilerplate?

xrotwang commented 5 years ago

This is already the case now. Using pytest.mark.parametrize essentially turns each call of the actual test function into a separate test, and reports the arguments passed into the test function:

============================================== FAILURES ==============================================
_________________________________ test_beastrun[configs100-<lambda>] _________________________________

configs = ('admin', 'covarion_multistate', 'ancestral_state_reconstruction', 'ascertainment_false')
assertion = <function <lambda> at 0x7fce8fc44f28>
config_factory = <function config_factory.<locals>.make_cfg at 0x7fce8fc43f28>
tmppath = PosixPath('/tmp/pytest-of-forkel@shh.mpg.de/pytest-64/test_beastrun_configs100__lamb0')

...
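The mechanism above can be sketched as follows. This is a minimal, self-contained example of pytest.mark.parametrize, not BEASTling's actual test code: the config tuples here are hypothetical stand-ins for the real configurations.

```python
import pytest

# Hypothetical configuration tuples standing in for BEASTling's real ones.
CONFIGS = [
    ("admin", "covarion_multistate"),
    ("admin", "binary_ctmc"),
]

@pytest.mark.parametrize("configs", CONFIGS)
def test_beastrun(configs):
    # Each entry in CONFIGS becomes its own test case, reported
    # separately with an id such as test_beastrun[configs0].
    assert isinstance(configs, tuple)
```

Because each parametrization is a separate test, a single failing case can be rerun on its own, either by the node id shown in the failure header (e.g. `pytest tests/test_beastrun.py::"test_beastrun[configs0]"`) or with a keyword filter like `pytest -k configs0`.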
xrotwang commented 5 years ago

What I did locally - but can push as well - is copy the XML to cwd. The copy could be cleaned up after the test passes, so whatever is left over is the XML of the last failure.
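That keep-on-failure idea can be sketched like this. The helper name and the `run_test` callback are assumptions for illustration, not BEASTling's actual code:

```python
import shutil
from pathlib import Path

def run_and_keep_xml_on_failure(xml_path, run_test):
    """Copy the generated XML into the current directory before running
    the test, and delete the copy only if the test passes, so the XML of
    the last failure is left behind for inspection.

    Sketch only: `run_test` is any callable that raises (e.g.
    AssertionError) on failure.
    """
    local_copy = Path.cwd() / Path(xml_path).name
    shutil.copy(xml_path, local_copy)
    run_test(xml_path)   # raises on failure, leaving local_copy behind
    local_copy.unlink()  # reached only when the test passed
```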

lmaurits commented 5 years ago

Ah, okay, I do see that now! Easily missed among the screens and screens of output, including long code listings, that pytest gives me right now. But I guess I can put things in setup.cfg to control how verbose it is...
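For example, pytest reads options from a `[tool:pytest]` section in setup.cfg, so something like the following fragment would trim the failure output (shorter tracebacks, quieter progress reporting); the exact choice of options is a matter of taste:

```ini
# setup.cfg fragment: make pytest failure output less verbose
[tool:pytest]
addopts = --tb=short -q
```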

That XML in cwd thing sounds pretty handy!

xrotwang commented 5 years ago

What I also did was not call validate_ids right at the end of build_xml, but "manually", in beastrun_tests, after writing and copying the XML.
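The resulting order of operations might look like this. The helpers here are stand-ins: the real build_xml and validate_ids in BEASTling have different signatures, and this only sketches moving validation out of XML generation and into the test:

```python
from pathlib import Path

def build_xml(config):
    # Stand-in for BEASTling's XML generation; note it no longer
    # validates IDs itself.
    return "<beast id='{0}'/>".format(config)

def validate_ids(xml_text):
    # Placeholder check standing in for the real ID validation.
    assert "id=" in xml_text

def beastrun_test(config, out_path):
    xml = build_xml(config)   # generate without validating
    out_path.write_text(xml)  # write (and, in practice, copy) the XML first
    validate_ids(xml)         # validate explicitly as the last step
```

The benefit of this ordering is that the XML file exists on disk for inspection even when validation fails.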

xrotwang commented 5 years ago

Can be closed, I guess?

lmaurits commented 5 years ago

Yep, thanks.