Rationale:
It would be nice to have some tests that serve as sanity checks whenever we make a change to the repo. I would avoid testing small units, so that the interface stays flexible if we need to change anything.
Implementation:
We should add a test where a randomly selected model is run on a subset of 20Newsgroups and all results are exported to .csv files,
then check with pandas whether the files are readable.