chrisjsewell closed this 6 years ago
There's a really strange thing happening in the submit_test
(https://travis-ci.org/aiidateam/aiida-plugin-cutter/jobs/421525101).
When I create the `DiffParameters` it is clearly the correct class:

```python
parameters = DiffParameters({'ignore-case': True})
print("input DiffParameters class: {1}, {0}, {2}".format(parameters, parameters.__class__, '{{cookiecutter.entry_point_prefix}}'))
```

In the pytest output:

```
input DiffParameters class: <class 'aiida_diff.data.DiffParameters'>, uuid: 7636fe1e-9ff7-46e3-aec0-8b21d7abb136 (unstored), diff
```
Yet pytest records the `inputdict` passed to `_prepare_for_submission` as:

```
inputdict = {'code': <Code: Remote code 'diff' on localhost-test, pk: 1, uuid: ff1c35da-9994-49f1-97c9-b6777e3847f7>, 'file1': <Si...02a-4e9e-95b1-272d3c30b169 (pk: 2)>, 'parameters': <ParameterData: uuid: 7636fe1e-9ff7-46e3-aec0-8b21d7abb136 (pk: 4)>}
```
This works on my repo. The only change I can think of is that I moved the `DiffParameters` declaration out of `__init__.py` and into its own module?
@chrisjsewell Thanks a lot for your work, much appreciated! I'm very busy at the moment, so this might linger here for a bit longer as I need to have a proper look (and probably need to discuss in the next aiida meeting whether we want to support both unittest and pytest, or just pytest; as long as the number of tests isn't too large, maintaining both would be doable).
As I see that you've continued to work on it, please let me know once you're done.
@ltalirz Yeah, no problem, I just wanted to get all the pytest code for aiida-diff documented before I remove it from my plugin repo. Really weird about the current test failure. I tried creating a local conda environment and installing it there. Initially I reproduced the test failure, but then I was playing around with the database, and now I can't reproduce the failure, even in a fresh conda environment. When I have time, I'll remake and test this branch more methodically.
@ltalirz Fixed it. It turns out that installing aiida-diff in development mode (`pip install -e`) was breaking the pytests on Travis (locally the tests seem to work fine in this mode). Was there a reason for installing this way?

Anyway, I'm now done with this PR.
> Was there a reason for installing this way?
Well, in another project I ran into problems with the standard `pip install` because the package included compiled extensions. The extensions were compiled for the pip-installed, system-wide version of the package, but Travis starts with the cwd in the repository folder, so `import <package>` will import the folder from the repo (which did not contain the built extensions).

I suspect the same is happening here, i.e. after `pip install aiida-diff`, when you `import aiida_diff` inside `manage.py`, you actually use the `aiida_diff` from the repository folder, not the one you have installed.
All in all, I am puzzled as to why `pip install -e` should cause the tests to fail.
I believe `reentry scan` was the problem, see #29
Ah yes, that did work. Don't ask me why it worked with `calc.submit` without the scan... it's also possible to issue the scan from Python; perhaps that happens inside `submit` but not in `test_submit`.
Anyhow: an AiiDA rule of thumb is that `reentry scan` is never wrong and sometimes needed ;-)
`calc.submit` was still failing, just on the scheduler side, when the call to `_prepare_for_submission` is made.
For completeness, it would probably be a good idea to add test coverage (and link to coveralls)
Done :)
This is what the pytest implementation of the tests would look like:

- `get_computer` and `get_code` functions, such that the created temporary work directory will be properly cleaned after testing
- `conftest.py` creates fixtures which can be utilised by all tests within the same folder or subfolder
- `calc.submit_test` instead of `calc.submit`, to correctly test that the submission is successful
- `test_process` for executing the calculation and testing it returns the correct return code and outputs the expected files
- `verdi run`, but will also be tested