lina-usc / pylossless

🧠 EEG Processing pipeline that annotates continuous data
https://pylossless.readthedocs.io/en/latest/
MIT License

MAINT: Setup a pytest fixture for reuse across tests #144

Closed · scott-huberty closed this 1 year ago

scott-huberty commented 1 year ago

It's taking a long time to run our test suite, and the reason is that in test_pipeline, multiple tests are literally re-running the entire pipeline (including two ICAs!) on a file that isn't really a test file to begin with (it's more than a minute long).

This sets up a pytest fixture so that we can run this file through the pipeline once and reuse the result across all of our tests. This will be especially handy for #138, because that PR is currently running the pipeline on a file yet another time for the rejection tests.

On my local machine, the entire test suite now runs in less than 30 seconds. Let's see if this improves the testing time on the VMs.
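
For illustration, here is a minimal sketch of such a session-scoped fixture. It assumes the `LosslessPipeline`/`run_with_raw` API shown in the project README; the file names and the fixture name are hypothetical placeholders, not the actual contents of this PR's conftest.py:

```python
# conftest.py -- sketch of a session-scoped fixture that runs the
# pipeline once and shares the result across the whole test session.
import mne
import pytest

import pylossless as ll


@pytest.fixture(scope="session")
def pipeline_fixture():
    """Run the full pipeline (including both ICAs) exactly once per session."""
    # Hypothetical sample recording; the real suite loads its own test file.
    raw = mne.io.read_raw_edf("sample_eeg.edf", preload=True)
    # Build a default pipeline config (per the README's Config workflow).
    config = ll.config.Config()
    config.load_default()
    config.save("test_ll_config.yaml")
    pipeline = ll.LosslessPipeline("test_ll_config.yaml")
    pipeline.run_with_raw(raw)
    return pipeline
```

Any test can then request the fixture by name, e.g. `def test_annotations(pipeline_fixture): ...`. Because the fixture is `scope="session"`, pytest caches it after the first use, so the expensive pipeline body executes only once no matter how many tests depend on it.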

codecov[bot] commented 1 year ago

Codecov Report

Merging #144 (4a4d0d9) into main (510b992) will decrease coverage by 9.49%. The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #144      +/-   ##
==========================================
- Coverage   77.63%   68.15%   -9.49%     
==========================================
  Files          12       13       +1     
  Lines         921      942      +21     
==========================================
- Hits          715      642      -73     
- Misses        206      300      +94     
Files                                     Coverage Δ
pylossless/conftest.py                    100.00% <100.00%> (ø)
pylossless/dash/tests/test_topo_viz.py    80.00% <100.00%> (-20.00%) ⬇️

... and 3 files with indirect coverage changes


scott-huberty commented 1 year ago

So actually, the dash tests are the real culprit behind the slow test suite now: `test_topo_viz` alone takes ~10 minutes. It is also failing and needs to be debugged, so I'm marking it with `pytest.mark.skip` for now.
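
For reference, a minimal sketch of skipping a test this way (the decorator placement is illustrative; the real test lives in pylossless/dash/tests/test_topo_viz.py):

```python
import pytest


# Skipped tests are reported as "s" in the pytest summary, with this reason
# shown under the -rs flag.
@pytest.mark.skip(reason="takes ~10 minutes and is currently failing; needs debugging")
def test_topo_viz():
    ...
```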