Right now several unit tests load the BCI Competition IV 2a dataset to test windowing, serialization, etc.
In my view this is overkill and unnecessarily uses a lot of memory and runtime. We could instead create much smaller, entirely fake data in those tests, which would speed up the unit tests quite a lot. It would also make them more appropriate as unit tests: they would not break if anything changes about the `MOABBDataset`.
Affected files under `test/unit_tests`:

- `datautil/test_serialization.py`
- `datasets/test_dataset.py`
- `preprocessing/test_preprocess.py`
- `preprocessing/test_windowers.py`
- `samplers/test_samplers.py`
- `training/test_scoring.py`