Seems like the envs are working - but of course the first test in the minimal env has already failed because there is no sklearn :-).
Actually, only two tests seem to be affected: test_var_sklearn.py and test_xvschema.py. How do we proceed, @mbillingr? Decorate all tests that require sklearn?
Do we have to decorate each function separately or can we decorate the whole class?
No idea - I guess decorators only work with functions?
You can also decorate classes (at least in Python 3, don't know about 2.7).
This would be nice, because in principle we could skip the whole Python file in some cases...
Can you check if the decorators from my last commit could work? How would you apply them to classes?
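For reference (this is not the decorator from that commit), a minimal sketch of a self-contained helper that can be applied either to a single test function or to a whole TestCase class, since unittest.skip returns a decorator that accepts both; the class and test names are made up:

```python
import unittest


def require_sklearn(obj):
    # Hypothetical helper: return the test (or class) unchanged if sklearn
    # imports, otherwise wrap it with unittest.skip so it is reported as skipped.
    try:
        import sklearn  # noqa: F401
        return obj
    except ImportError:
        return unittest.skip("sklearn is not installed")(obj)


@require_sklearn
class TestSklearnBackend(unittest.TestCase):
    def test_version(self):
        import sklearn
        self.assertTrue(hasattr(sklearn, "__version__"))
```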
Before this whole decorator thing gets out of control - since we wrap all our tests in classes derived from unittest.TestCase anyway, can't we just require the import in the setUp method? Then maybe there's no need to create decorators at all.
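A minimal sketch of what that could look like (hypothetical class name); raising unittest.SkipTest in setUp marks every test in the class as skipped rather than failed when the import is missing:

```python
import unittest


class TestVarSklearn(unittest.TestCase):
    def setUp(self):
        # Skip (rather than fail) every test in this class when sklearn is missing.
        try:
            import sklearn  # noqa: F401
        except ImportError:
            raise unittest.SkipTest("sklearn is not installed")
```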
There is @unittest.skipIf - maybe we can use this decorator for our needs?
> Before this whole decorator thing gets out of control - since we wrap all our tests in classes derived from unittest.TestCase anyway, can't we just require the import in the setUp method? Then maybe there's no need to create decorators at all.
Which means we have to repeat these boilerplate checks. If not too many tests are affected this would be OK.
> There is @unittest.skipIf - maybe we can use this decorator for our needs?
Maybe. We'd still have to pass a condition.
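For illustration, the condition could be computed once at module level and then passed to @unittest.skipIf; the class and test names here are made up:

```python
import unittest

# Evaluate the condition once, at import time.
try:
    import sklearn  # noqa: F401
    HAVE_SKLEARN = True
except ImportError:
    HAVE_SKLEARN = False


class TestXvSchema(unittest.TestCase):
    @unittest.skipIf(not HAVE_SKLEARN, "sklearn is not installed")
    def test_multitrial(self):
        # Only runs when sklearn could be imported.
        self.assertTrue(HAVE_SKLEARN)
```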
The solution in my last commit works. However, the following tests fail because they rely on all backends:
scot.tests.test_ooapi.Test_MVARICA.testFunctionality
scot.tests.test_ooapi.Test_MVARICA.testModelIdentification
scot.tests.test_ooapi.Test_MVARICA.test_plotting
scot.tests.test_parallel.TestFunctions.test_output
scot.tests.test_plainica.TestICA.testModelIdentification
scot.tests.test_plotting
scot.tests.test_varica.TestMVARICA.testModelIdentification
And probably even more.
This shows that our optional dependencies are deeply entangled with the tests. It will probably be quite painful to tease these tests apart. The alternative is to make sklearn and matplotlib required dependencies.
I would rather solve this problem by changing the backends so that they do not register themselves with the manager if their requirements are not met.
e.g. backend_sklearn.py (line 96+):

```python
try:
    import sklearn
    backend.register('sklearn', generate)
except ImportError:
    pass
```
Very good - even cleaner with an else branch. What else?
Nice! I didn't know you could use else with try/except.
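For concreteness, the variant with the else branch might look like this (backend.register and generate are the names from the snippet above):

```python
try:
    import sklearn  # noqa: F401
except ImportError:
    pass  # requirement not met: the backend simply never registers itself
else:
    backend.register('sklearn', generate)
```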
I wonder:
1. libgfortran again :angry:
2. run_tests.sh, line 14. These lines should take the possibility into account that the libraries are not available...

Re 2, I've fixed the broken run_tests.sh script. What do we do about 1? It seems like Anaconda is not really interested in keeping specific old package versions around - in our 2.7 env, it automatically updates
- scipy from 0.13.3 to 0.17.0
- sklearn from 0.15.0 to 0.15.2
- numpy from 1.8.2 to 1.9.3

I think the best thing to do would be to just use whatever package versions the current Anaconda distribution ships (and completely ignore non-MKL, because the Anaconda default is MKL). To test the oldest supported packages, we should create a non-Anaconda-based Python env (e.g. installed via Ubuntu packages).
I.e., I suggest the following testing envs:
- apt/pip
- apt/pip
What about scot.tests.test_ooapi.TestMVARICA.test_plotting? It only passes when joblib or sklearn is available. sklearn is an optional dependency, and we don't even mention joblib as an optional requirement. How do we deal with this (and the related tests in scot.tests.test_parallel.TestFunctions)?

@mbillingr, do you know why the tests now fail after I've moved the imports into setUp?
> do you know why the tests now fail after I've moved the imports into setUp?
Because they are imported locally.
Well, OK... they are just imported, but the variables that refer to the modules are local to the setUp function. For solutions, see StackOverflow.
Hm. I thought they were known to the object. What do you think is the best solution here? Make plt global?
In general, I have the feeling that mixing core tests with optional tests is rather ugly. Can't we do something nicer, such as completely separate core tests from optional tests?
> What do you think is the best solution here? Make plt global?
Global would work, but that is kinda ugly. self.plt could work too, but may require many changes. It's also possible to catch import errors and skip certain tests if plt does not exist.
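A sketch combining both ideas (hypothetical class name; the Agg backend is chosen only so the example runs headless): bind the module to the instance in setUp and skip the whole class when matplotlib is missing:

```python
import unittest


class TestPlotting(unittest.TestCase):
    def setUp(self):
        try:
            import matplotlib
            matplotlib.use('Agg')  # headless backend, e.g. for CI
            import matplotlib.pyplot as plt
        except ImportError:
            raise unittest.SkipTest("matplotlib is not installed")
        self.plt = plt  # bind to the instance instead of a setUp-local name

    def tearDown(self):
        self.plt.close('all')

    def test_figure(self):
        self.assertIsNotNone(self.plt.figure())
```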
> Can't we do something nicer, such as completely separate core tests from optional tests?
Sure. We even have the option to separate them at different levels:
Hm. With option 2, we wouldn't have to change the file structure, and we could try/except import errors to see if the tests in the optional class should be run. Shall we go for it?
> Hm. With option 2, we wouldn't have to change the file structure, and we could try/except import errors to see if the tests in the optional class should be run. Shall we go for it?
:+1:
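For illustration, option 2 could look roughly like this inside an existing test file (class and test names are made up):

```python
import unittest

try:
    import sklearn
except ImportError:
    sklearn = None


class TestCore(unittest.TestCase):
    """Core tests: run with the minimal dependencies only."""

    def test_always_runs(self):
        self.assertTrue(True)


@unittest.skipIf(sklearn is None, "sklearn is not installed")
class TestOptional(unittest.TestCase):
    """Optional tests: skipped as a whole when the import above failed."""

    def test_sklearn_available(self):
        self.assertIsNotNone(sklearn)
```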
Closing, if we ever need this, it is probably easier to start from scratch.
Fixes #166