INT-NIT / BEP032tools

Software tools supporting the BIDS Extension Proposal (BEP) dedicated to adding support for electrophysiological data recorded in animal models (BEP032)
MIT License

Improving unit tests #13

Closed SylvainTakerkart closed 3 years ago

SylvainTakerkart commented 4 years ago

Hi,

I'd like to suggest a few things to improve the unit tests:

  1. we need unit tests that do not use the datasets at all: a procedure that directly tests the functions, with one test for each type of error (see the sketch after this list)!

  2. we also need to keep the tests on the datasets! and for that, we need to improve the datasets themselves:

    • we need one fully valid dataset (maybe several, with different characteristics)
    • we need to construct datasets that deviate from the valid dataset by violating one and only one rule of the spec (one dataset for each type of error)
    • we need one or several datasets that violate several rules at once
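
To make point 1 concrete, here is a minimal sketch of dataset-free tests with one test per error type; the `is_valid_subject_name` function and its import path are hypothetical placeholders, to be adapted to the actual functions exposed by the checker:

```python
# Minimal sketch, not tied to the actual checker API: `is_valid_subject_name`
# is a hypothetical rule-level function; adapt the import to the real module.
from ando.checker import is_valid_subject_name  # hypothetical import path


def test_valid_name_passes():
    # a directory name that follows the spec should be accepted
    assert is_valid_subject_name("sub-enya")


def test_wrong_prefix_fails():
    # one error type per test: here, the prefix is not 'sub-'
    assert not is_valid_subject_name("subject-enya")


def test_forbidden_character_fails():
    # another error type: a character the spec does not allow in the label
    assert not is_valid_subject_name("sub-en_ya")
```

The idea is that each rule of the spec is exercised directly, so a regression is immediately traceable to one function and one error type.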
SylvainTakerkart commented 4 years ago

@Slowblitz question: are all the datasets included in the repo really useful for the current version, or are some of the old ones now useless (for example because the specs have evolved)?

SylvainTakerkart commented 4 years ago

@Slowblitz have you updated the datasets? (I forgot...)

Slowblitz commented 4 years ago

No, not yet; it's been added to the TODO list at https://github.com/INT-NIT/AnDOChecker/projects/1

SylvainTakerkart commented 4 years ago

ok! then, for the datasets, I'm not sure you need both a directory like ds001, ds002, etc., and a data subdirectory inside each of them... having just ds002/exp-Landing would be enough (and better); and maybe renaming ds to dataset would also be worth it...

in the end, that would give "dataset002/exp-Landing". WDYT?
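
Concretely, that would mean:

```
before:  ds002/data/exp-Landing/...
after:   dataset002/exp-Landing/...
```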

Slowblitz commented 4 years ago

Yes, it seems good to delete the "data" directory level in each dataset, and renaming 'ds001' to 'dataset001' seems a good idea too.

SylvainTakerkart commented 3 years ago

Hi @Slowblitz ! I'm coming back to this issue now ;)

  1. It seems that in the latest version, none of the datasets available for testing are actually valid AnDO datasets... Could you please add one? (one that is not trivial, e.g. one with several subjects and several sessions per subject)

  2. Also, maybe it would be better to have one non-AnDO dataset for each type of error? (because for now, datasets 1-10 seem to test very different things, all mixed together, etc.) That way each error type gets its own dataset and its own test (see the sketch below).
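
Just to illustrate what I mean (the `is_valid` entry point and the paths are hypothetical and would need to be adapted to the actual checker API), something like a parametrized test that runs the checker once per invalid dataset:

```python
# Minimal sketch, assuming a hypothetical top-level `is_valid(path)` entry point
# and one invalid dataset per error type under tests/datasets/invalid/.
from pathlib import Path

import pytest

from ando.checker import is_valid  # hypothetical import path

INVALID_ROOT = Path("tests/datasets/invalid")  # hypothetical location
VALID_ROOT = Path("tests/datasets/valid")      # hypothetical location


@pytest.mark.parametrize(
    "name",
    sorted(p.name for p in INVALID_ROOT.iterdir() if p.is_dir()),
)
def test_each_invalid_dataset_is_rejected(name):
    # each of these datasets violates exactly one rule of the spec
    assert not is_valid(INVALID_ROOT / name)


def test_valid_dataset_is_accepted():
    # the full reference dataset (several subjects, several sessions) must pass
    assert is_valid(VALID_ROOT / "dataset001")  # hypothetical name
```

With one directory per error type, a failing test then points directly to the rule that is no longer enforced.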