hoechenberger closed this issue 6 months ago
This makes me think that we should take our smallest dataset and set up a GitHub actions version of the tests that just run on that dataset.
So a rough check (ignore my cruft):

```
$ du -s ~/mne_data/ds00* | sort -rnk1
3883408 /home/larsoner/mne_data/ds001810
3474416 /home/larsoner/mne_data/ds000246
2407716 /home/larsoner/mne_data/ds003104
2286128 /home/larsoner/mne_data/ds000248
1864280 /home/larsoner/mne_data/ds004229
1766884 /home/larsoner/mne_data/ds000117
1690760 /home/larsoner/mne_data/ds000247
992692 /home/larsoner/mne_data/ds004107
891284 /home/larsoner/mne_data/ds000248_ica
851544 /home/larsoner/mne_data/ds003392
181484 /home/larsoner/mne_data/ds001971
30796 /home/larsoner/mne_data/ds003775
```
Looks like we could use ds001971, ds003775, or ds003392. ds003392 is our smallest MEG dataset and ds001971 runs decoding, so I'm inclined toward testing those.
@larsoner Turns out that my issues may have been caused by storing the output files in a OneDrive-synced folder. Maybe some OneDrive magic caused the caching to sometimes fail, or who knows what. At any rate, all seems to be fine now – I'm saving the pipeline output in a folder that's not synced to OneDrive.
I cannot provide sufficient information now – this issue is supposed to serve as a reminder.
I'm under the impression that caching is not working on my macOS machine – it seems that all steps are always run again, regardless of the caching setting. I also tried the "hash" method, but I'm not seeing any change. Needs to be investigated.
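For context, the idea behind a "hash" caching method is that a step is re-run only when the hash of its inputs changes; if every step runs again on every invocation, the cache key is presumably being invalidated (or the cache store lost, e.g. by a syncing folder). Here is a minimal stdlib-only sketch of that idea – it is an illustration of the general technique, not the pipeline's actual caching mechanism, and `cached_step`/`double` are made-up names:

```python
import hashlib
import pickle

_cache = {}   # key -> stored result
runs = []     # records which calls actually executed the function body

def cached_step(func, *args):
    # Derive a content-based key from the function name and its inputs.
    key = hashlib.sha256(pickle.dumps((func.__name__, args))).hexdigest()
    if key not in _cache:
        runs.append(key)           # the step really ran this time
        _cache[key] = func(*args)
    return _cache[key]             # otherwise: served from cache

def double(x):
    return x * 2

cached_step(double, 21)            # first call: computed
result = cached_step(double, 21)   # second call: cache hit, body not re-run
print(result, len(runs))           # → 42 1
```

If caching were broken in the way described above, `len(runs)` would grow with every call instead of staying at 1.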