NeuroTechX / moabb

Mother of All BCI Benchmarks
https://neurotechx.github.io/moabb/
BSD 3-Clause "New" or "Revised" License

Problem with cached results #628

Closed · toncho11 closed 4 months ago

toncho11 commented 5 months ago

It seems that once you switch to cached results in the evaluation, the next run does not check the number of subjects. For example, it now prints the cached results directly even though, for one dataset, it never completed all 5 of the subjects I set out to test. If I do not use cached results, it processes all 5 subjects as expected. So with cached results you can get false results where not all subjects have participated in the evaluation.

Does the cached-results option use the cached results without checking what is set in the current run of the evaluation? I am having a hard time understanding it.
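For reference, here is a minimal sketch of the kind of setup being discussed: an evaluation with result caching enabled (overwrite=False) and the dataset restricted to its first 5 subjects. The dataset, paradigm, and pipeline below are placeholders chosen for illustration and are not taken from the original report.

```python
# Illustrative sketch only: dataset, paradigm, and pipeline are placeholders.
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014_001
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import LeftRightImagery

dataset = BNCI2014_001()
dataset.subject_list = dataset.subject_list[0:5]  # evaluate only the first 5 subjects

evaluation = WithinSessionEvaluation(
    paradigm=LeftRightImagery(),
    datasets=[dataset],
    overwrite=False,  # reuse scores already stored in the results file instead of recomputing
)
pipelines = {"A": make_pipeline(CSP(n_components=8), LDA())}
results = evaluation.process(pipelines)  # pandas.DataFrame, one row per subject/session/pipeline
```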

bruAristimunha commented 5 months ago

@PierreGtch

PierreGtch commented 5 months ago

Hello @toncho11, are you talking about the caching of the evaluation results (the scores of each pipeline in an HDF5 file) or about the caching of the preprocessed data (as MNE files in a BIDS structure)?

toncho11 commented 5 months ago

Currently I am talking only about "caching of the evaluation results".

PierreGtch commented 5 months ago

This is strange: WithinSessionEvaluation, CrossSessionEvaluation, and CrossSubjectEvaluation should check which subjects were already evaluated and evaluate only the ones that were not.

Could you create a minimal example to reproduce your issue (see https://stackoverflow.com/help/minimal-reproducible-example)? And please provide the version of MOABB you are using.

toncho11 commented 5 months ago

One thing I noticed: if I previously ran a pipeline named A and then I run only a pipeline B that is identical to A, the results show "A" instead of "B", although I expect to get B because it is the only one I am currently running. This is with cached results on.

PierreGtch commented 5 months ago

What you now describe is the expected behaviour. MOABB checks if the results of a pipeline are already present in the cache using a hash key based on `repr(pipeline)`. See here: https://github.com/NeuroTechX/moabb/blob/2e3f2938a3645070e1a95fed26eb71ec1a39716e/moabb/analysis/results.py#L46

If you want to recompute the same results twice, the simplest solution is to move the results file between the two runs.
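To illustrate the `repr`-based keying, here is a minimal sketch (assuming an MD5 digest of `repr(pipeline)`; see the results.py link above for the actual implementation) of why two identically built pipelines registered under different names map to the same cache entry:

```python
# Sketch only: a cache key derived from repr(pipeline) ignores the name the
# pipeline was registered under, so structurally identical pipelines collide.
import hashlib

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline


def digest(pipeline):
    # Assumed keying scheme: an MD5 hash of the pipeline's repr.
    return hashlib.md5(repr(pipeline).encode("utf-8")).hexdigest()


pipe_a = make_pipeline(LDA())  # registered as "A" in a previous run
pipe_b = make_pipeline(LDA())  # registered as "B" in the current run

print(digest(pipe_a) == digest(pipe_b))  # True: cached scores of "A" are reused for "B"
```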

toncho11 commented 5 months ago

If I am running a pipeline named "A", I expect to get "A", not "B", even if they are identical. Also, does it check whether both runs were completed with the same number of subjects? Not just the number, but the same set of subjects. Thanks!

PierreGtch commented 5 months ago

> If I am running a pipeline named "A", I expect to get "A", not "B", even if they are identical.

You should not expect that. This is not what is implemented.

> Also, does it check whether both runs were completed with the same number of subjects? Not just the number, but the same set of subjects. Thanks!

Yes, it checks. It should only run the evaluation for the missing subjects.

toncho11 commented 5 months ago

There is no logic in naming something one way and getting it back under another name in the results. It is confusing.

I am still thinking about whether a reproducible example can be made for my initial problem.

PierreGtch commented 5 months ago

> There is no logic in naming something one way and getting it back under another name in the results. It is confusing.

@bruAristimunha @sylvchev what are your thoughts on this?

> I am still thinking about whether a reproducible example can be made for my initial problem.

Thanks, until then it’s difficult to help you with this issue.

toncho11 commented 4 months ago

If I run 2 subjects, then enable cached results and switch to 10 subjects, I get some results quickly. But if I run 10 subjects with caching of results disabled, I get different results.

PierreGtch commented 4 months ago

Could you provide a minimal code example to reproduce this?

toncho11 commented 4 months ago

So what happens is that MOABB gives me the same results when I change the number of subjects.

For example, when I do:

dataset.subject_list = dataset.subject_list[0:5]

or

dataset.subject_list = dataset.subject_list[0:10]

the results are the same.

The results object from evaluate.process always contains all of the subjects previously processed for the selected pipelines. It does not take into account that I modified the number of subjects for this new run. So you always get the results for the maximum number of subjects already processed, even if I try to reduce the number of subjects.

PierreGtch commented 4 months ago

evaluate.process returns the results as a pandas.DataFrame. If you only want the results of certain subjects you have already computed, you can filter the DataFrame like this:

results_filtered = results[results.subject.isin([1,2,3,4,5])]
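For example, with a toy DataFrame standing in for the returned results (the subjects and scores below are made up purely to show the filtering):

```python
# Toy example: filter an evaluation results DataFrame down to selected subjects.
import pandas as pd

# Stand-in for the DataFrame returned by the evaluation's process(...) call.
results = pd.DataFrame(
    {
        "subject": [1, 2, 3, 8, 9, 10],
        "pipeline": ["A"] * 6,
        "score": [0.81, 0.77, 0.85, 0.90, 0.72, 0.79],
    }
)

results_filtered = results[results.subject.isin([1, 2, 3, 4, 5])]
print(results_filtered)  # only the rows for subjects 1, 2, and 3 remain
```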

toncho11 commented 4 months ago

If I am asking the evaluation for only 2 subjects, the results object should not return 10 in the first place, should it? I mean, it is confusing.

PierreGtch commented 4 months ago

Restricting the number of subjects as you did is not a supported feature, it’s a hack. We welcome all new contributions, in case you want to add a feature :)

toncho11 commented 4 months ago

I understand. Additional code is needed to see which subjects were cached and which are requested in the next run. Thanks :)