toncho11 opened this issue 1 week ago
@PierreGtch
Hello @toncho11, are you talking about the caching of the evaluation results (the scores of each pipeline in an HDF5 file) or about the caching of the preprocessed data (as MNE files in a BIDS structure)?
Currently I am talking only about "caching of the evaluation results".
This is strange: `WithinSessionEvaluation`, `CrossSessionEvaluation`, and `CrossSubjectEvaluation` should check which subjects were already evaluated and only evaluate the ones that were not.
Could you create a minimal example to reproduce your issue (see https://stackoverflow.com/help/minimal-reproducible-example)? And please provide the version of MOABB you are using.
One thing I noticed: if I had previously run pipeline A, and I then run only pipeline B, which is identical to A, then in the results I get "A" instead of "B". I expect to get "B" because it is the only pipeline I am currently running. This is with cached results on.
What you describe now is the expected behaviour. MOABB checks whether the results of a pipeline are already present in the cache using a hash key based on `repr(pipeline)`. See here: https://github.com/NeuroTechX/moabb/blob/2e3f2938a3645070e1a95fed26eb71ec1a39716e/moabb/analysis/results.py#L46
If you want to recompute the same results twice, the simplest solution is to move the results file between the two runs.
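To illustrate the repr-based lookup described above, here is a minimal sketch (an assumption for illustration, not MOABB's exact code; the `cache_key` helper is hypothetical) showing why the user-facing name never enters the cache key: two pipelines built identically have the same `repr`, hence the same key, so a pipeline named "B" hits the entry cached earlier under "A".

```python
# Hypothetical sketch of a repr-based cache key (not MOABB's actual code).
import hashlib
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def cache_key(pipeline):
    # Only repr(pipeline) is hashed; the dict name ("A", "B") plays no role.
    return hashlib.md5(repr(pipeline).encode("utf-8")).hexdigest()

pipelines = {
    "A": make_pipeline(StandardScaler(), LogisticRegression()),
    "B": make_pipeline(StandardScaler(), LogisticRegression()),
}

# Identical construction -> identical repr -> identical key, so "B"
# resolves to the results previously stored under "A".
print(cache_key(pipelines["A"]) == cache_key(pipelines["B"]))  # True
```

This is why renaming a pipeline in the YAML/dict does not produce a new cache entry: only a change to the pipeline's parameters (and hence its `repr`) does.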
If I am running a pipeline named "A", I am expecting to get "A", not "B", even if they were the same. Also, does it check if both were completed with the same number of subjects? Not the number, but the same set of subjects. Thanks!
> If I am running a pipeline named "A", I am expecting to get "A", not "B", even if they were the same.
You should not expect that. This is not what is implemented.
> Also, does it check if both were completed with the same number of subjects? Not the number, but the same set of subjects. Thanks!
Yes, it checks. It should only run the evaluation for the missing subjects.
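The intended behaviour can be sketched as a simple set difference (an illustration, not MOABB's actual implementation): only subjects absent from the results file are evaluated again.

```python
# Hypothetical sketch of the missing-subject check (not MOABB's code).
requested = {1, 2, 3, 4, 5}   # subjects configured for the current run
cached = {1, 2}               # subjects already stored in the results file

# Only the difference should be (re-)evaluated.
to_evaluate = sorted(requested - cached)
print(to_evaluate)  # [3, 4, 5]
```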
There is no logic in naming something one way and somehow getting it with another name in the results. It is confusing.
I am still thinking if a reproducible example can be made for my initial problem.
> There is no logic in naming something one way and somehow getting it with another name in the results. It is confusing.
@bruAristimunha @sylvchev what are your thoughts on this?
> I am still thinking if a reproducible example can be made for my initial problem.
Thanks; until then it’s difficult to help you on this issue.
It seems that once you switch to cached results, the next run of the evaluation does not check the number of subjects. For example, it now prints the cached results directly, even though for one dataset the evaluation never completed all 5 subjects I had set to test. If I do not use cached results, it processes all 5 subjects as expected. So with cached results one can get false results where not all subjects have participated in the evaluation.
So the cached-results option uses the cached results without checking what is set in the current run of the evaluation? I am having a hard time understanding this.