Hello @toncho11. Are you talking about the caching of the evaluation results (the scores of each pipeline in an HDF5 file), or about the caching of the preprocessed data (as MNE files in a BIDS structure)?
Currently I am talking only about "caching of the evaluation results".
This is strange: `WithinSessionEvaluation`, `CrossSessionEvaluation`, and `CrossSubjectEvaluation` should check which subjects were already evaluated and only evaluate the ones that were not.
Could you create a minimal example to reproduce your issue (see https://stackoverflow.com/help/minimal-reproducible-example)? And please provide the version of MOABB you are using.
So one thing I noticed: if I had previously run pipeline A and then I run only pipeline B, which is the same as A, then in the results I get "A" instead of "B", although I expect to get "B" because it is the only one I am currently running. This is with cached results on.
What you now describe is the expected behaviour. MOABB checks if the results of a pipeline are already present in the cache using a hash key based on `repr(pipeline)`. See here: https://github.com/NeuroTechX/moabb/blob/2e3f2938a3645070e1a95fed26eb71ec1a39716e/moabb/analysis/results.py#L46
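For illustration, a minimal sketch of the idea (the actual implementation is at the link above):

```python
import hashlib

# Sketch: the cache key is derived from the pipeline's repr, i.e. from its
# structure and parameters, not from the name you gave it in your run.
def get_digest(pipeline):
    return hashlib.md5(repr(pipeline).encode("utf8")).hexdigest()
```

Because two identical pipelines have the same `repr`, your "A" and "B" map to the same key, so the cached entry (saved under the first name) is returned for both.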
If you want to recompute the same results twice, the simplest solution is to move the results file between the two runs.
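For example (a hypothetical snippet; the name and location of the results file depend on your MOABB configuration):

```python
import shutil

# Back up the cached scores before re-running; "results.hdf5" is an
# assumed filename, check your configuration for the actual path.
shutil.move("results.hdf5", "results_backup.hdf5")
```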
If I am running a pipeline named "A", I expect to get "A", not "B", even if they are the same. Also, does it check whether both were completed with the same number of subjects? Not just the number, but the same set of subjects. Thanks!
> If I am running a pipeline named "A", I expect to get "A", not "B", even if they are the same.
You should not expect that. This is not what is implemented.
> Also, does it check whether both were completed with the same number of subjects? Not just the number, but the same set of subjects. Thanks!
Yes, it checks. It should only run the evaluation for the missing subjects.
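Conceptually, it works something like this hypothetical sketch (not MOABB's actual API):

```python
# Hypothetical helper: only subjects that are requested but absent from
# the cache should be evaluated.
def subjects_to_evaluate(requested, cached):
    cached = set(cached)
    return [s for s in requested if s not in cached]

print(subjects_to_evaluate([1, 2, 3, 4, 5], cached=[1, 2]))  # -> [3, 4, 5]
```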
There is no logic in naming something one way and getting it back under another name in the results. It is confusing.
I am still thinking about whether a reproducible example can be made for my initial problem.
> There is no logic in naming something one way and getting it back under another name in the results. It is confusing.
@bruAristimunha @sylvchev what are your thoughts on this?
> I am still thinking about whether a reproducible example can be made for my initial problem.
Thanks, until then it’s difficult to help you with this issue.
If I run 2 subjects, then enable cached results and switch to processing 10 subjects, I get some results quickly. But if I run 10 subjects with caching of results disabled, I get different results.
Could you provide minimal code to reproduce this?
So what happens is that MOABB gives me the same results when I change the number of subjects. For example, when I do:

```python
dataset.subject_list = dataset.subject_list[0:5]
```

or

```python
dataset.subject_list = dataset.subject_list[0:10]
```

the results are the same.
The results object from `evaluate.process` always contains all of the subjects previously processed for the selected pipelines. It does not take into account that I modified the number of subjects for this new run. So you always get the results with the maximum number of subjects already processed, even if I try to reduce the number of subjects.
`evaluate.process` returns the results as a `pandas.DataFrame`. If you only want the results of certain subjects you have already computed, you can filter the dataframe like this:

```python
results_filtered = results[results.subject.isin([1, 2, 3, 4, 5])]
```
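For instance, to keep only the subjects of the current run (an assumption here: depending on the MOABB version, the `subject` column may hold strings, so casting both sides is safer):

```python
# Keep only the subjects present in the current (possibly sliced) list.
wanted = {str(s) for s in dataset.subject_list}
results_filtered = results[results["subject"].astype(str).isin(wanted)]
```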
If I am asking the evaluation for only 2 subjects, shouldn't the results object avoid returning results for 10 in the first place? I mean, it is confusing.
Restricting the number of subjects as you did is not a supported feature, it’s a hack. We welcome all new contributions, in case you want to add a feature :)
I understand. Additional code is needed to see which subjects were cached and which are requested in the next run. Thanks :)
It seems that once you switch to cached results, the next run of the evaluation does not check the number of subjects. For example, it now prints the cached results directly, even if for one dataset it has never completed all 5 subjects I set to test. If I do not use cached results, then it processes all 5 subjects as expected. So with cached results, one can get false results where not all subjects have participated in the evaluation.
So the cached results option uses the cached results without checking what is set in the current run of the evaluation? I am having a hard time understanding it.