cvnlab / GLMsingle

A toolbox for accurate single-trial estimates in fMRI time-series data
BSD 3-Clause "New" or "Revised" License
101 stars · 44 forks

Question: detecting repeats (and then not detecting them). What to try next? #159

Open Alxmrphi opened 3 days ago

Alxmrphi commented 3 days ago

I have a quick question. I'm playing around with GLMSingle for the first time on some new data we recently acquired. It's surface data from one hemisphere, 6 runs, one subject and there are a few repeats (within-session) of some stimulus images. This can be seen and recognised by the output of GLMSingle ("The number of trials for each condition ... " reports > 1 values as expected). However, GLMDenoise and fracridge is turned off as it also detects:

UserWarning: Since there are no repeats, standard cross-validation usage of <wantfracridge> cannot be performed. (same for glmdenoise).

I tried to load DESIGNINFO.npy but am having trouble accessing the data. It doesn't look like a NumPy array; it seems to be a dict, but it doesn't support keys() or len(), at least the way I'm loading it. I thought inspecting it might reveal why it thinks there are no repeats.
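For what it's worth, .npy files that hold a Python dict (rather than a plain array) need `allow_pickle=True`, and `np.load` then returns a 0-d object array, so `.item()` is needed to recover the dict. A minimal sketch (the toy file and its keys here are stand-ins, not the actual DESIGNINFO contents; substitute your real `./output/DESIGNINFO.npy` path):

```python
import numpy as np

# Toy stand-in: save a dict the same way GLMsingle saves its outputs.
# np.save wraps a non-array object in a 0-d object array and pickles it.
np.save('designinfo_demo.npy', {'stimorder': [0, 1, 2], 'stimdur': 2.5})

# Loading requires allow_pickle=True; .item() unwraps the 0-d object array.
designinfo = np.load('designinfo_demo.npy', allow_pickle=True).item()
print(type(designinfo))          # <class 'dict'>
print(sorted(designinfo.keys()))
```

Without `.item()`, the loaded object is an `ndarray` of dtype `object`, which is why `keys()` and `len()` appear to be missing.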

Anyway, I just want to know where to poke around to figure out where the confusion is. Am I right that there is a contradiction in receiving this message when the diagnostics clearly count multiple trials per condition? Or is the problem that the repeats occur only within individual runs rather than across runs?

Any advice appreciated. The full trace is below, along with the parameter settings (taken from the Python GLMsingle example).


```python
opt = dict()

# set important fields for completeness (but these would be enabled by default)
opt['wantlibrary'] = 1
opt['wantglmdenoise'] = 1
opt['wantfracridge'] = 1

# for the purpose of this example we will keep the relevant outputs in memory
# and also save them to disk
opt['wantfileoutputs'] = [1, 1, 1, 1]
opt['wantmemoryoutputs'] = [1, 1, 1, 1]

# running Python GLMsingle involves creating a GLM_single object
# and then running the procedure using the .fit() routine
glmsingle_obj = GLM_single(opt)

# visualize all the hyperparameters
print(glmsingle_obj.params)
```
*** DIAGNOSTICS ***:
There are 6 runs.
The number of conditions in this experiment is 300.
The stimulus duration corresponding to each trial is 2.50 seconds.
The TR (time between successive data points) is 2.00 seconds.
The number of trials in each run is: [75, 75, 75, 75, 75, 75].
The number of trials for each condition is: [3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, ...] (a pattern of five 3s followed by five 0s, repeated across all 300 conditions).
For each condition, the number of runs in which it appears: [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, ...] (a pattern of five 1s followed by five 0s, repeated across all 300 conditions).
For each run, how much ending buffer do we have in seconds? [16.0, 16.0, 16.0, 16.0, 16.0, 16.0].
*** Saving design-related results to ./output/DESIGNINFO.npy. ***
*** FITTING DIAGNOSTIC RUN-WISE FIR MODEL ***
*** Saving FIR results to ./output/RUNWISEFIR.npy. ***

*** FITTING TYPE-A MODEL (ONOFF) ***

fitting model...
done.

preparing output...
done.

computing model fits...
done.

computing R^2...
done.

computing SNR...
done.

/Users/alxmrphi/miniforge3/envs/glmsingle_demo/lib/python3.10/site-packages/glmsingle/glmsingle.py:665: UserWarning: None of your conditions occur in more than one run. Are you sure this is what you intend?
  warnings.warn(msg)
/Users/alxmrphi/miniforge3/envs/glmsingle_demo/lib/python3.10/site-packages/glmsingle/glmsingle.py:675: UserWarning: Since there are no repeats, standard cross-validation usage of <wantglmdenoise> cannot be performed. Setting <wantglmdenoise> to 0.
  warnings.warn(msg)
/Users/alxmrphi/miniforge3/envs/glmsingle_demo/lib/python3.10/site-packages/glmsingle/glmsingle.py:682: UserWarning: Since there are no repeats, standard cross-validation usage of <wantfracridge> cannot be performed. Setting <wantfracridge> to 0.
  warnings.warn(msg)

*** Saving results to ./output/TYPEA_ONOFF.npy. ***

*** Setting brain R2 threshold to 0.4098676520271428 ***

/Users/alxmrphi/miniforge3/envs/glmsingle_demo/lib/python3.10/site-packages/sklearn/mixture/_base.py:270: ConvergenceWarning: Best performing initialization did not converge. Try different init parameters, or increase max_iter, tol, or check for degenerate data.
  warnings.warn(
*** FITTING TYPE-B MODEL (FITHRF) ***

chunks: 100%|██████████| 1/1 [00:03<00:00,  3.49s/it]
kendrickkay commented 3 days ago

Hi. Yes, as you say: "Is this a problem that the repeats are not across runs but only occur within individual runs?" ==> Yeah. GLMsingle wants to do cross-validation across distinct runs, so it needs at least some repeats that occur in different runs. Is that possible in your experiment? If not, one trick is to artificially split each run into two "run halves" before giving them to GLMsingle.
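The run-splitting trick above could be sketched roughly as follows. The function name, shapes, and list layout here are assumptions for illustration (data as a list of vertices x time arrays, design as a list of time x conditions arrays, mirroring typical Python GLMsingle inputs); in practice you would want the cut to fall on a trial boundary, ideally with some buffer, so no trial straddles the split.

```python
import numpy as np

def split_runs_in_half(data, design):
    """Split each run into two halves so that within-run repeats
    become across-"run" repeats for cross-validation purposes.
    Hypothetical helper, not part of the GLMsingle API."""
    data_out, design_out = [], []
    for d, x in zip(data, design):
        cut = d.shape[1] // 2  # naive midpoint; prefer a trial boundary
        data_out += [d[:, :cut], d[:, cut:]]     # split along the time axis
        design_out += [x[:cut, :], x[cut:, :]]   # keep design rows aligned
    return data_out, design_out

# toy example: 2 runs, 10 vertices, 150 time points, 300 conditions
data = [np.random.randn(10, 150) for _ in range(2)]
design = [np.zeros((150, 300)) for _ in range(2)]
split_data, split_design = split_runs_in_half(data, design)
print(len(split_data), split_data[0].shape, split_design[0].shape)
```

The split data and design lists would then be passed to `.fit()` in place of the original 6-run lists, giving 12 shorter "runs" in which a condition's repeats can land in different runs.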

Alxmrphi commented 2 days ago

Thanks Kendrick, I will give that a shot!