Alxmrphi opened 3 days ago
Hi. Yes, as you say: "Is this a problem that the repeats are not across runs but only occur within individual runs?" ==> Yeah. GLMsingle wants to do cross-validation across distinct runs, so it needs at least some repeats that occur in different runs. Is that possible in your experiment? If not, one trick is to artificially split a run into two "run halves" before giving the data to GLMsingle.
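Something like this, just a rough sketch with placeholder names and sizes, assuming your design and data are in the usual list-of-runs format (design: time x conditions per run; data: vertices x time per run):

```python
import numpy as np

# Placeholder sizes; substitute your own runs.
n_runs, n_trs, n_conds, n_verts = 6, 300, 40, 1000
design = [np.zeros((n_trs, n_conds)) for _ in range(n_runs)]
data = [np.random.randn(n_verts, n_trs) for _ in range(n_runs)]

# Split each run into two "run halves" along the time axis.
design_halves, data_halves = [], []
for run_design, run_data in zip(design, data):
    half = run_design.shape[0] // 2  # pick a split point that falls between trials
    design_halves.append(run_design[:half, :])
    design_halves.append(run_design[half:, :])
    data_halves.append(run_data[:, :half])   # time is the last axis of the data
    data_halves.append(run_data[:, half:])

# design_halves / data_halves now act as 12 "runs" and can be passed to
# GLM_single(opt).fit(design_halves, data_halves, stimdur, tr) as usual.
```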
Thanks Kendrick, I will give that a shot!
I have a quick question. I'm playing around with GLMsingle for the first time on some new data we recently acquired. It's surface data from one hemisphere, 6 runs, one subject, and there are a few within-session repeats of some stimulus images. This can be seen and recognised in the output of GLMsingle ("The number of trials for each condition ..." reports values > 1, as expected). However, GLMdenoise and fracridge are turned off because it also reports:
UserWarning: Since there are no repeats, standard cross-validation usage of <wantfracridge> cannot be performed.
(same for GLMdenoise). I tried to load DESIGNINFO.npy but am having trouble accessing that data. It doesn't look like a NumPy array, seems like a dict, but doesn't have `keys` or `len` in the way I'm loading it, at least. I thought inspecting it might reveal why it thinks there are no repeats.

Anyway, I just want to know where to poke around to figure out where the confusion is. Am I right in understanding that it's a contradiction to receive this message even when the diagnostics clearly count multiple trials per condition? Is this a problem that the repeats are not across runs but only occur within individual runs?
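For reference, this is roughly what I'm trying; the path and the final `.item()` call are my guesses, assuming the file is a pickled dict saved with np.save:

```python
import numpy as np

# DESIGNINFO.npy seems to be a pickled Python dict saved with np.save,
# which np.load returns as a 0-d object array (hence no keys()/len()).
raw = np.load('DESIGNINFO.npy', allow_pickle=True)
print(type(raw), raw.shape)      # numpy.ndarray with shape ()

designinfo = raw.item()          # unwrap the 0-d array to get the dict back
print(list(designinfo.keys()))
```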
Any advice appreciated. Below is the full trace. Also below are the param settings (taken from the Python GLMsingle example).