EnergySystemsModellingLab / MUSE_OS

Welcome to the MUSE-OS repository
https://muse-os.readthedocs.io/en/latest/
GNU General Public License v3.0

Tests fail when running the full testing suite [BUG] #261

Closed tsmbland closed 2 months ago

tsmbland commented 5 months ago

Describe the bug

The developer documentation instructs you to run `python -m pytest` to execute the tests. However, I've found that the tests fail when using this command (more details below). I note that this command isn't actually run as part of the CI, which first runs the tests excluding the notebook tests and the regression tests, then runs the regression tests separately. Splitting the tests up in this way also works fine locally. Nevertheless, the tests should ideally pass when running the full suite; otherwise, the developer instructions should be changed to clarify which tests should be run locally and which shouldn't.

Details:

To Reproduce

I ran the following:

```shell
python -m venv .venv
.venv/scripts/Activate.ps1
pip install -e .[dev,doc]
python -m pytest
```

Expected behavior

Ideally, the tests should pass when running the full suite. Otherwise, the developer instructions should be changed to clarify which tests should be run locally.


dalonsoa commented 3 months ago

I think the files changing issue should now be solved after merging https://github.com/EnergySystemsModellingLab/MUSE_OS/pull/289 as all output files were upgraded, as well.

On the error in the tests when the full suite is run, I believe Ben gets a different error, in a notebook.

HarmonicReflux commented 3 months ago

To add visual context to the bug, the following is what I get when running the full set of tests.

(screenshots of the pytest output attached: pytest1of1, pytest2of3, pytest3of3)

tsmbland commented 3 months ago

Right, so I've just re-run this on my machine and everything now passes, presumably because of what Diego said above.

I think the reason it's failing for @HarmonicReflux is that the `docs/Results` folder, which is produced by the `example-output.ipynb` notebook, is no longer committed to the repo (we removed it in #287), so it won't exist on your machine. If you run the `example-output.ipynb` notebook first and then run the tests, they should now pass.

I guess the reason it passes on GitHub is that the `example-output.ipynb` notebook is run at some point before that test, so the `docs/Results` folder is created (not sure, I'd have to investigate further).

It's a very good point though, and we should fix this so pytest works locally without having to manually run the `example-output.ipynb` notebook first.
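One way to make the suite self-contained would be a session-scoped fixture that skips the dependent tests when the results folder is missing. This is only a sketch: the `docs/Results` path and the `results_dir` fixture name are taken from the discussion above, not from the actual MUSE conftest.

```python
from pathlib import Path

import pytest

def results_available(base: Path = Path(".")) -> bool:
    """Return True if the folder produced by example-output.ipynb exists."""
    return (base / "docs" / "Results").is_dir()

@pytest.fixture(scope="session")
def results_dir() -> Path:
    """Skip dependent tests when docs/Results has not been generated yet."""
    if not results_available():
        pytest.skip(
            "docs/Results missing - run the example-output.ipynb notebook first"
        )
    return Path("docs/Results")
```

Tests that consume the notebook output would then request `results_dir` instead of hard-coding the path, and a missing folder becomes an explicit skip rather than an obscure failure.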

dalonsoa commented 3 months ago

Indeed, that's essentially the case. The default model is run first and then the Results folder is linked to the right place. A bit of a hack, to be honest, so we probably need to revisit that so the right files are available when running the tests - or at least document it, so the user can follow the steps.
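The run-then-link step described above could be made explicit in a small helper. The sketch below assumes the output lands in a `Results/` folder at the repo root and is expected under `docs/Results`; both paths and the helper names are assumptions for illustration.

```python
import shutil
import subprocess
from pathlib import Path

def link_results(src: Path, dest: Path) -> Path:
    """Expose the model output folder at the location the tests expect.

    Uses a symlink where possible, falling back to a copy (e.g. on
    Windows without symlink privileges).
    """
    dest.parent.mkdir(parents=True, exist_ok=True)
    if dest.exists() or dest.is_symlink():
        return dest
    try:
        dest.symlink_to(src.resolve(), target_is_directory=True)
    except OSError:
        shutil.copytree(src, dest)
    return dest

def ensure_default_results(repo_root: Path) -> Path:
    """Run the default model if its output is missing, then link it."""
    src = repo_root / "Results"
    if not src.exists():
        subprocess.run(["muse", "--model", "default"], check=True, cwd=repo_root)
    return link_results(src, repo_root / "docs" / "Results")
```

Calling `ensure_default_results` from a session fixture would remove the manual step, at the cost of a slower first test run.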

HarmonicReflux commented 3 months ago

I noted one can also create a detailed report of the tests and store it in a file like so: `python -m pytest -rA > test_results.log`. For future reference, I've attached test_results.log here.

The warning at muse_env/lib/python3.9/site-packages/jupyter_client/connect.py:221 is due to the line `from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write`, which points to `jupyter_core/__init__.py` and contains `from __future__ import annotations` and `from .version import __version__, version_info  # noqa: F401`.

The failed test appears to be related to tests/test_filters.py:168 and xarray.

I will run `muse --model default` again. During install, I set up and purged my MUSE environment quite a few times, so after the final setup I may simply not have run that command, which would explain the error.

Edit: The test log after running `muse --model default` is as follows: test_results_after_muse_model_default.log

The bottom line as far as the tests are concerned is now:

```
====== 304 passed, 10 skipped, 1 xfailed, 4 warnings in 571.78s (0:09:31) ======  # before running muse --model default
====== 304 passed, 10 skipped, 1 xfailed, 4 warnings in 486.62s (0:08:06) ======  # after running muse --model default
```

The error regarding the .ipynb is resolved (though this should be mentioned in the documentation), but the overall passed/skipped/expected-fail numbers are unchanged.