In fact, having automated or manual test instructions is a requirement for the JOSS review: https://github.com/openjournals/joss-reviews/issues/3295
Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
this regards the devel3 branch
We will tackle this in the documentation since unit tests exist (we will make sure they are part of the release). Coverage is 79% at the moment, and our Jenkins CI pipeline executes them.
Additionally there are two shell scripts:
1. utils/udocker_test.sh
2. utils/udocker_test-run.sh
where almost all the commands of udocker are tested from the perspective of a user; the second script tests udocker run with several images and all execution modes except singularity (S1).
We have unit tests, integration tests, security tests with bandit, and code style checking. We use Jenkins pipelines for our SQA. They are under utils as Mario explained. The .sqa directory has the configurations for the Jenkins pipeline as code. We test the udocker functionalities; we don't provide tests for specific applications or environments such as HPC.
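For reference, here is a minimal sketch of how these layers could be exercised locally from a checkout (the commands are the ones quoted later in this thread; the exact working directory for the shell scripts may differ):

```bash
# A sketch of running the test layers described above from the root of a
# udocker checkout (devel3 branch); the shell scripts may need to be run
# from the utils/ directory instead.

# Unit tests with coverage (also executed by the Jenkins CI pipeline)
nosetests -v --with-coverage --cover-package=udocker tests/unit

# Code style / static analysis
pylint --rcfile=pylintrc --disable=R,C udocker

# Security checks
bandit -r udocker -f html -o bandit.html

# High-level functional tests from the user's perspective
./utils/udocker_test.sh
./utils/udocker_test-run.sh
```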
@jorge-lip great! Are there instructions anywhere for how a developer or interested user like myself could run the tests?
Just did a commit with information about how to run the tests; it's in section 9 of the installation manual: https://github.com/indigo-dc/udocker/blob/master/docs/installation_manual.md
This is great! I had a few suggestions here -> https://github.com/indigo-dc/udocker/pull/333
PR approved and merged
I'm good with the new additions - @mviereck when you've had a chance to take a look let me know, and if you are also good we can close the issue.
I've tried to run the tests as described:
virtualenv -p python3 ud3
source ud3/bin/activate
git clone https://github.com/indigo-dc/udocker.git
cd udocker
pip install -r requirements-dev.txt
I got a lot of errors at the pip install step.
I could fix this by installing libcurl4-gnutls-dev (Debian), and several files are downloaded.
But then I ran into the next error:
@mviereck I tried the tests and they did work - do you have python-dev headers available? (I'm seeing a reference to Python.h). Although that's a bit of a strange dependency, probably one of the libraries needs it to build.
@vsoch I've installed python3-dev and got a new error, now about a missing gnutls.h.
I could fix this by installing libgnutls28-dev.
Now this step completes.
The step nosetests -v --with-coverage --cover-package=udocker tests/unit shows several 'ok' and some errors. Because I don't know what is checked exactly (syntax and code style?), I can't assess its meaning.
The results of the further tests also tell me nothing, but seem to be ok:
pylint --rcfile=pylintrc --disable=R,C udocker
bandit -r udocker -f html -o bandit.html
Currently I am running ./udocker_test.sh; it seems to take some time.
Hmm, it probably would make sense to list the system dependencies somewhere or provide a container. I must have lucked out that I had them.
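For Debian/Ubuntu, the packages reported in this thread could be listed along the lines of the following sketch (only what was needed on this particular system, not an authoritative list):

```bash
# Build dependencies reported above for "pip install -r requirements-dev.txt"
# on Debian; other distributions will use different package names.
sudo apt-get update
sudo apt-get install -y python3-dev libcurl4-gnutls-dev libgnutls28-dev
```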
"Nose’s tagline is “nose extends unittest to make testing easier”. It’s is a fairly well known python unit test framework, and can run doctests, unittests, and “no boilerplate” tests." https://pythontesting.net/framework/nose/nose-introduction/
"Pylint is a Python static code analysis tool which looks for programming errors, helps enforcing a coding standard, sniffs for code smells and offers simple refactoring suggestions." https://pypi.org/project/pylint/
"Bandit is a tool designed to find common security issues in Python code. To do this Bandit processes each file, builds an AST from it, and runs appropriate plugins against the AST nodes. Once Bandit has finished scanning all the files it generates a report." https://pypi.org/project/bandit/
I hope this clarifies what tests are being performed
./udocker_test.sh and ./udocker_test-run.sh may take more or less time. It depends on the one hand on your internet bandwidth, since both download Docker images from Docker Hub, and on the other hand on your laptop or desktop performance, since they execute the CLI, including running containers, importing and exporting containers, etc. On my desktop, ./udocker_test.sh took about 46 sec and ./udocker_test-run.sh took about 1 min 33 sec.
./udocker_test.sh and ./udocker_test-run.sh may take more or less time; it depends on the one hand on your internet bandwidth, since both download Docker images from Docker Hub.
I did not measure the time, but after about half an hour I put the laptop aside. Yes, the internet bandwidth was the bottleneck here.
udocker_test.sh showed one minor error:
tar img file exists https://download.ncg.ingrid.pt/webdav/udocker_test/centos7.tar
------------------------------------------------------------>
Error: failed to extract container: centos7.tar
Error: load failed
[FAIL] T028: udocker load -i centos7.tar
It makes sense to collect all errors and show them all together when the script has finished. I found this one by scrolling through the output.
I hope this clarifies what tests are being performed
Thank you! Yes that helps.
So nosetests, pylint and bandit run code checks, while udocker_test.sh and udocker_test-run.sh run tests on the user level. This is basically said in the instructions, but could be pointed out a bit more clearly.
The tests required by JOSS are fulfilled by the high-level tests; the code checks are a nice bonus.
Hmm, it probably would make sense to list the system dependencies somewhere or provide a container.
A similar/additional thought: udocker_test.sh and udocker_test-run.sh must not have an already existing ~/.udocker. It would be good if the tests were possible nonetheless, either in a container or by using a directory other than ~/.udocker for the tests, maybe set with one of the environment variables to something like /tmp/udocker-test. That would allow running the test scripts at any time.
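As a sketch of that suggestion only (a comment further down reports that setting UDOCKER_DIR does not currently make the test scripts work this way), the scripts could be pointed at a throwaway directory:

```bash
# Hypothetical wrapper: run the functional tests against a temporary
# udocker directory so an existing ~/.udocker is left untouched.
# Assumes the test scripts honour UDOCKER_DIR, which a later comment in
# this thread reports is not yet the case.
export UDOCKER_DIR=$(mktemp -d /tmp/udocker-test.XXXXXX)
./utils/udocker_test.sh
./utils/udocker_test-run.sh
rm -rf "$UDOCKER_DIR"
```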
Many thanks for the suggestions; we will address those in a future release.
I've run udocker_test-run.sh and got some errors, likely caused by damaged image downloads due to limited bandwidth.
The same suggestion as above: please collect the errors and print a summary at the end. Looking only at the final output would currently suggest that everything went well. Even echo $? printed 0 although errors occurred.
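A generic sketch of the kind of bookkeeping being asked for here (illustrative only, not the actual udocker_test.sh code): record each failure, print a summary at the end, and return a non-zero exit status when anything failed.

```bash
# Illustrative pattern only -- not taken from the real test scripts.
FAILED_TESTS=()

run_test() {
    local id="$1"; shift
    if "$@"; then
        echo "[OK]   $id: $*"
    else
        echo "[FAIL] $id: $*"
        FAILED_TESTS+=("$id: $*")
    fi
}

# ... individual test cases would be invoked through run_test ...

if [ "${#FAILED_TESTS[@]}" -ne 0 ]; then
    echo "Summary of failed tests:"
    printf '  %s\n' "${FAILED_TESTS[@]}"
    exit 1
fi
echo "All tests passed."
exit 0
```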
The two test scripts have been improved to display a summary of the failed tests upon exit, when failures occur. The exit status also reflects the errors.
Great!
Just did a test run with udocker_test.sh. It shows some curl timeout errors, but overall success. Is that correct?
Yes, it is alerting about some TCP connect timeouts, which means that your network must have some problems. There is recovery for these errors, the operations are retried and it completes ok.
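For context, the retry-on-timeout behaviour being described might look roughly like the following (an illustrative sketch, not udocker's actual download code; the URL is just the one from the test output above):

```bash
# Illustrative retry loop for a flaky download -- not udocker's real code.
URL="https://download.ncg.ingrid.pt/webdav/udocker_test/centos7.tar"
for attempt in 1 2 3; do
    if curl -fSL --connect-timeout 30 -o centos7.tar "$URL"; then
        break
    fi
    echo "Download attempt $attempt failed, retrying..."
    sleep 5
done
```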
There is recovery for these errors, the operations are retried and it completes ok.
Great! docker and podman just drop hours of download on such failures.
So far I am good with the tests and we can close here.
If it helps you, I would run nosetests -v --with-coverage --cover-package=udocker tests/unit again to show you the error messages it produced here. I cannot assess whether they are harmless or important.
Remaining possible improvements, but I won't insist: a clearer distinction between the code checks and the user-level tests in the documentation, and being able to run the test scripts despite an already existing ~/.udocker.
Thanks. Yes please, send us those error messages. OK, we will make the differentiation between the tests clearer in the documentation. Regarding ~/.udocker, the best approach is for the user to rename .udocker to something else, run the tests, and then move the directory back. We can add that to the documentation as well.
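A minimal sketch of that workaround, assuming the default ~/.udocker location:

```bash
# Hypothetical sequence: set aside an existing ~/.udocker, run the
# functional tests, then restore the original directory.
mv ~/.udocker ~/.udocker.saved
./utils/udocker_test.sh
./utils/udocker_test-run.sh
rm -rf ~/.udocker              # remove the state created by the tests
mv ~/.udocker.saved ~/.udocker
```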
Those are error messages from the code itself, printed when a unit test passes through the branch containing that message; they are not real errors from the tests. Nonetheless, I have set the message level to 0 so as not to show any messages from the code during the execution of the unit tests; that commit will be done soon.
Solved in the latest commit to master; it only affected the unit tests.
Regarding ~/.udocker, the best approach is for the user to rename .udocker to something else, run the tests, and then move the directory back. We can add that to the documentation as well.
This would be a good addition. I tried to set UDOCKER_DIR to a different folder, but that did not help.
Those are error messages from the code itself, printed when a unit test passes through the branch containing that message; they are not real errors from the tests.
OK, I understand.
Nonetheless, I have set the message level to 0 so as not to show any messages from the code during the execution of the unit tests; that commit will be done soon.
Set it in whatever way you think is best. I did not want to enforce a change here, I just tried to understand.
Thank you for all this, I am good with the tests.
As per the last comment, we will close this issue.
It looks like tests aren't included in the release, and at least for HPC, installers would want to be able to run tests before moving forward. Have you considered adding them?