Closed by smeisler 2 years ago
Hi Steven,
Sorry for the delayed response. I haven't seen that error before. Could I take a look at the data you're inputting to the BayesGLM_cifti function, so I can try to get an idea of what's happening here? The error looks like a dimension mismatch, but we have a number of checks for that in our function, so I'm surprised to see a message like that. If I can't figure it out, we can try sending a minimal reproducible example to the INLA team and see what they make of it.
Thanks for being our brave tester! Dan
Hi @danieladamspencer,
Of course! All of the relevant files (plus some extras) are here: https://drive.google.com/file/d/1m2bPZGHVwPN8l9Aehx2Hjhf2PBC5p1Tv/view
Note that the input GIFTI I use is the template tpl-fsLR midthickness file. Besides that, I think the file names will be informative enough. If it would help, I can also send over the code I use to load the nuisance regressors and events.
best, Steven
Yes, sharing your code would be very helpful, thank you!
Dan
Sounds good, when I have a moment I will clean it up a bit and send it over!
Hi @danieladamspencer,
Code is attached! It should work after updating the relevant /PATH/TO paths at the beginning. Let me know if there are any errors. I had to change the file extension from .r to .txt to upload it here. There are two versions of the bayesglm command: one runs only the first session, and the other runs all three sessions. If you could test both versions, that would be appreciated!
Thanks, Steven
Hi @smeisler,
There are a few issues in the code that you sent, so it won't run as-is. I did a few fixes, and here's where I stand now:
1) The number of rows of nuisance data is different from the number of rows in your CIFTI data, which causes one break.
2) Your data have a number of vertices with very low (but nonzero!) variances and very low means (one mean is less than 0.01). This conflicts with one of the checks we use to scale the time series. We will need to think about how to handle this case.
3) When I tried to prewhiten your data, I received an error stemming from ciftiTools, so I'll need to check on that one, too.
I think problem (2) may be contributing to a computationally singular system within INLA. I'm going to try another cutoff and see if that produces more favorable results.
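For context, the kind of check described in (2) can be sketched in a few lines of base R. This is an illustration only, not the package's actual code; the 0.1 and 1e-6 cutoffs are assumptions loosely based on the error messages later in this thread:

```r
# Hypothetical sketch: flag vertices (columns of a T x V BOLD matrix) whose
# mean or variance is too small to scale safely. Cutoffs are illustrative.
flag_bad_vertices <- function(BOLD, meanTol = 0.1, varTol = 1e-6) {
  v_mean <- colMeans(BOLD)
  v_var <- apply(BOLD, 2, var)
  which(v_mean < meanTol | v_var < varTol)
}

# Toy example: vertex 2 has a tiny mean, vertex 3 has zero variance.
BOLD <- cbind(rnorm(100, mean = 100),
              rnorm(100, mean = 0.005, sd = 0.001),
              rep(50, 100))
flag_bad_vertices(BOLD)  # flags vertices 2 and 3
```

Vertices caught by such a check would then be excluded before any scaling or model fitting.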
I'll keep you updated, Dan
Hi Dan,
1) Hmm, that seems strange. On my end, the first two runs have 130 timepoints and 130 rows of nuisance regressors, while the last run has 123 of each. Which run(s) did you have problems with? I'm also curious what needed to be fixed, since it ran without error on my end.
2) Yes, I noticed that too, so I temporarily set the BOLD scaling to FALSE.
3) That also seems strange, since prewhitening appears to finish for me based on the full traceback.
Either way, thank you for looking into this, and I am happy to help get to the bottom of it in any way I can!
Steven
As an update, I temporarily turned off prewhitening and smoothing and got further, but reached the following error. In short, it looks like the model argument is supposed to be a string, but it is not surrounded by "" here. See the INLA f documentation.
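To illustrate what I suspect is going on (this is my speculation about the mechanism, in plain base R rather than INLA internals): the traceback shows the f() term being re-parsed and evaluated with envir = data, so an object like spde that only exists in a calling function's frame may not be visible there.

```r
# Toy reproduction of the scoping problem (an assumption, not INLA's code).
make_term <- function() {
  spde <- "a-matern-spde-object"   # local variable, like the one in BayesGLM
  quote(f(beta1, model = spde))    # the formula term captures only the *name*
}
term <- make_term()

data_env <- new.env(parent = baseenv())  # mimics eval(..., envir = data)
data_env$f <- function(x, model) model   # stand-in for INLA::f
data_env$beta1 <- 1

res <- try(eval(term, envir = data_env), silent = TRUE)
inherits(res, "try-error")  # TRUE: object 'spde' not found

# Quoting side-steps the lookup, because "spde" is then just a string:
eval(quote(f(beta1, model = "spde")), envir = data_env)  # "spde"
```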
SETTING UP DATA
Reading in data for session 1
Constructing design matrix for session 1
RUNNING MODELS
... LEFT CORTEX
...... estimating model with INLA
Error in INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2, : object 'spde' not found
Traceback:
1. BayesGLM_cifti(cifti_fname = fnames_ts[1], surfL_fname = fname_gifti_left,
. surfR_fname = fname_gifti_right, brainstructures = c("left",
. "right"), onsets = events_1, TR = TR, nuisance = confounds_1,
. nuisance_include = c("drift", "dHRF"), scale_BOLD = TRUE,
. scale_design = TRUE, GLM_method = "Bayesian", ar_order = 0,
. ar_smooth = NULL, resamp_res = 10000, num.threads = parallel::detectCores() -
. 2, verbose = TRUE, outfile = "/om4/group/gablab/data/ds003126/code/bayesfmri/BayesfMRI_fmriprep_test",
. return_INLA_result = TRUE, avg_sessions = TRUE, trim_INLA = TRUE,
. keep = FALSE, )
2. BayesGLM(data = session_data, beta_names = beta_names, vertices = verts_left,
. faces = faces_left, scale_BOLD = scale_BOLD_left, scale_design = FALSE,
. num.threads = num.threads, return_INLA_result = return_INLA_result,
. outfile = outfile_left, verbose = verbose, avg_sessions = avg_sessions,
. trim_INLA = trim_INLA, keep = keep, twostage = twostage)
3. system.time(INLA_result <- estimate_model(formula = formula,
. data = model_data, A = model_data$X, spde, prec_initial = 1,
. num.threads = num.threads, verbose = verbose, contrasts = contrasts,
. keep = keep, twostage = twostage))
4. estimate_model(formula = formula, data = model_data, A = model_data$X,
. spde, prec_initial = 1, num.threads = num.threads, verbose = verbose,
. contrasts = contrasts, keep = keep, twostage = twostage)
5. inla(formula, data = data, control.predictor = list(A = A, compute = TRUE),
. verbose = verbose, keep = keep, num.threads = num.threads,
. control.inla = list(strategy = "gaussian", int.strategy = int.strategy),
. control.family = list(hyper = list(prec = list(initial = prec_initial))),
. control.compute = list(config = TRUE), contrasts = contrasts,
. lincomb = lincomb)
6. inla.core(formula = formula, family = family, contrasts = contrasts,
. data = data, quantiles = quantiles, E = E, offset = offset,
. scale = scale, weights = weights, Ntrials = Ntrials, strata = strata,
. lp.scale = lp.scale, link.covariates = link.covariates, verbose = verbose,
. lincomb = lincomb, selection = selection, control.compute = control.compute,
. control.predictor = control.predictor, control.family = control.family,
. control.inla = control.inla, control.fixed = control.fixed,
. control.mode = control.mode, control.expert = control.expert,
. control.hazard = control.hazard, control.lincomb = control.lincomb,
. control.update = control.update, control.lp.scale = control.lp.scale,
. control.pardiso = control.pardiso, only.hyperparam = only.hyperparam,
. inla.call = inla.call, inla.arg = inla.arg, num.threads = num.threads,
. blas.num.threads = blas.num.threads, keep = keep, working.directory = working.directory,
. silent = silent, inla.mode = inla.mode, safe = FALSE, debug = debug,
. .parent.frame = .parent.frame)
7. inla.interpret.formula(formula, data.same.len = data.same.len,
. data = data, data.model = data.model, parent.frame = .parent.frame)
8. eval(parse(text = gsub("^f\\(", "INLA::f(", terms[i])), envir = data,
. enclos = p.env)
9. eval(parse(text = gsub("^f\\(", "INLA::f(", terms[i])), envir = data,
. enclos = p.env)
10. INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2,
. 2))))
Hey @smeisler, out of curiosity: what OS are you using? I'm not getting this error on my Mac, but I have been setting up a separate test on my Linux machine, and there I see the same thing.
Linux CentOS 7. Since I am on R 4.1, I used the latest INLA testing branch. I also installed the CentOS 7 INLA binaries (instead of the default Ubuntu ones).
Okay, I have a temporary idea for you: I have been working on an EM implementation of the Bayesian GLM on the 1.8.EM branch. You should be able to run the analysis with GLM_method = "EM". Give that a try and let me know if it works.
EDIT: To clarify, I can get the EM implementation working on my Ubuntu machine, so I think it should work for you, too.
I have installed the EM branch, thanks!
For some reason, the code is ignoring my scale_BOLD = FALSE input, and I cannot get past the error Error in scale_timeseries(t(BOLD_s), scale = scale_BOLD, transpose = FALSE): Some local means are less than 0.1. Please set scale_BOLD = FALSE.
Did you test this on my data, by any chance, and get past the scaling step?
My command:
# SINGLE SESSION BayesGLM
single_session <- BayesGLM_cifti(
cifti_fname = fnames_ts[1],
surfL_fname = fname_gifti_left,
surfR_fname = fname_gifti_right,
brainstructures = c("left", "right"),
onsets = events_1,
TR = TR,
nuisance = confounds_1,
nuisance_include = c("drift", "dHRF"),
scale_BOLD = FALSE,
scale_design = TRUE,
GLM_method = "EM",
ar_order = 6,
ar_smooth = NULL,
session_names = c('run-1'),
resamp_res = 10000,
num.threads = parallel::detectCores() - 2,
verbose = TRUE,
outfile = outfile,
return_INLA_result = TRUE,
avg_sessions = FALSE,
trim_INLA = TRUE
)
When I set the ar_order to 0, I get past this error and the code "proceeds to completion", but with the spde error I mentioned earlier. I think the relevant code is on line 865 of R/BayesGLM.R, in which spde needs to be surrounded by quotes to be recognized as an INLA model.
SETTING UP DATA
READING IN CIFTI DATA
MAKING DESIGN MATRICES
RUNNING MODELS
... LEFT CORTEX
...... estimating model with INLA
Timing stopped at: 0.277 0.178 0.456
Error in INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2, :
object 'spde' not found
... RIGHT CORTEX
...... estimating model with INLA
Timing stopped at: 0.198 0.152 0.346
Error in INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2, :
object 'spde' not found
PUTTING RESULTS IN CIFTI FORMAT
DONE!
Hi Steven,
I see you submitted a patch with this change. Did putting spde in quotes work for you?
Mandy
The PR was merged and it seemed to get past that error, but then it raises a new error about the length of the model not being defined (related to the missing n argument to f).
Hi @smeisler and @mandymejia,
I just accepted the pull request, but now I get the error Argument 'n' in f() is required for model: spde. @damondpham will take over the bug fixing from here on the master branch, I believe.
Steven, on the 1.8.EM branch, you are getting a message about estimating the model with INLA, which only happens when GLM_method = "Bayesian" on that branch. Are you sure you reinstalled the package and changed the argument to GLM_method = "EM"? That might get you some results, at least.
Best, Dan
I believe that's what I did, unless I installed the branch incorrectly (remotes::install_github("mandymejia/BayesfMRI@1.8.EM", force = TRUE)).
I get the following errors when using GLM_method = 'EM' and scale_BOLD = FALSE:
- With ar_order = 6: the scale_BOLD argument is apparently ignored and I cannot get past the previous error Error in scale_timeseries(t(BOLD_s), scale = scale_BOLD, transpose = FALSE): Some local means are less than 0.1. Please set scale_BOLD = FALSE.
- With ar_order = 0: the code has been running for over an hour now for a single-subject, single-session run, which seems abnormal based on the benchmarking in the paper. Will update with outcomes.
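As an aside, the scaling error above makes sense if the time series are being converted to percent signal change, which divides by the local mean. A hedged sketch of that kind of check (my illustration, not the package's actual scale_timeseries code; only the 0.1 tolerance is taken from the error message):

```r
# Illustrative only: scaling to percent signal change divides by the local
# mean, so a mean below some tolerance makes the scaled values explode and
# is rejected up front with the error seen in this thread.
scale_pct <- function(y, meanTol = 0.1) {
  m <- mean(y)
  if (m < meanTol) {
    stop("Some local means are less than 0.1. Please set scale_BOLD = FALSE.")
  }
  100 * (y - m) / m
}

ok  <- scale_pct(c(99, 100, 101))                      # fine: mean is 100
bad <- try(scale_pct(c(0.004, 0.006)), silent = TRUE)  # rejected: mean 0.005
```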
Hi Steven,
Thank you for your patience and persistence.
The EM branch is still under development, so there are a few bugs to work out. Dan has been working on an EM method to serve as an alternative to INLA. INLA is fast but can be finicky, as you've seen. Since you're having trouble with INLA on your Linux OS, the EM implementation might work for you. However, since it's still under development, you'll probably have to forego prewhitening for now on the EM branch until we work out that bug you found.
If running on a Mac is an option, you could try that using the master branch (1.8), which is more fully tested. We are currently testing 1.9 and plan to deploy that soon.
One question about your analysis: how many task conditions do you have? That will make a big difference for computation time. Another thing you can try to improve computation time is to resample to a lower resolution, say 5k.
Best, Mandy
Hello,
No problem! I appreciate all of your attention to this and am really excited to get some results. I might try running on a Mac for testing, but it will not be sustainable as I scale up the analysis. I have two task conditions. I will also try downsampling. Out of curiosity, do you know what RAM and sampling resolution were used for the benchmarking? I see that it used a Mac Pro with 6 threads, but I'm not sure about the RAM and sampling resolution. My current run on Linux is now ~2 hours in, using 40 threads and ~60 GB of RAM for a single session :( I guess that just highlights the efficiency of INLA.
Thanks, Steven
Hi Steven,
That doesn't sound right. For two tasks at that resolution, it should be a lot faster and more efficient. It's possible that the low-variance vertices in your data are causing issues for model convergence.
I believe Damon is going to take a stab at your analysis on the 1.8 branch and see if there are issues beyond the possible INLA problems on your Linux OS. In the meantime, can you try running on your Mac using the master or 1.9 branch and see if it works? If not, we may try raising the threshold for low-variance vertices to exclude from the analysis.
Best, Mandy
Hi Steven,
Damon & Dan have been working on some changes in the package to make sure you can run your analysis. We will be moving from version 1.8 to version 1.9 shortly on the master branch, once Damon is finished running all of our test scenarios. We've been working on 1.9 for a while – it has a lot of improvements to the code structure – and are running final tests now. For now, Damon has run your single-session analysis on 1.9 and it completes without errors. I'll let him follow up to confirm and provide additional information.
If you still have issues using INLA on your Linux distribution, Dan is finalizing a completely INLA-free implementation using EM. Previously the EM implementation still relied on some INLA functions for the sparse matrix computations, but Dan has replaced all of these, so that you won't need to have INLA loaded or even installed with this version. We expect that to be ready shortly and will be on a new branch called 1.9EM.
So to summarize, you should be able to use 1.9 or 1.9EM very shortly to run your analyses. Damon will let you know when 1.9 is ready for you to install and try on your end.
Thanks again for being a beta tester! We really appreciate your feedback on the package.
Mandy
Hi Mandy,
This is fantastic news; thank you for the update! It is especially helpful for me that the INLA dependency is being replaced, as my Intel-based Mac was having trouble with it (apparently the R 4.1 INLA build only works on Apple silicon Macs at this time). Once it's working, I would be happy to provide my fMRIPrep example code. Between that and the HCP tutorial, I think it would cover a wide range of users.
Best, Steven
Yes, we are excited to have more bases covered so more people can use the package without issues. INLA is a wonderful tool, but we've had our share of challenges with it. Our plan is to continue to have one package version that relies on INLA, since it's often the fastest option from a computational perspective, and one version that does not. Within the lab we have Mac Intel and M1 machines, as well as a Linux system we sometimes use, so we completely sympathize with the challenges of getting INLA to run across all of these architectures. Not to mention Windows users, who are completely out of luck!
Thank you for being willing to share your fMRIPrep example code – it's great that fMRIPrep is making surface processing more accessible and commonplace. That would indeed be a valuable addition to our demos. And we would of course give you credit.
Hi Steven!
I've made several fixes and updates to 1.9, and I was able to run your script on it (thanks for providing it!).
Single- or multi-session, scaling BOLD or not, prewhitening or not: it should work in all these cases now! So please let me know if you see otherwise. I'll be checking GitHub frequently, so don't be afraid to reach out.
One thing to note: for your BOLD data, there are some data vertices on the underside of the medial wall with low but nonzero mean and variance. You can visualize them with something like this:
xii <- read_xifti(fnames_ts[1])
xii <- apply_xifti(xii, 1, mean)
plot(xii)
These will cause problems, especially if you opt to scale the data. So we added arguments meanTol and varTol to control the mean and variance thresholds for masking out data locations. You should set meanTol to 0.1 or greater. Then BayesGLM will automatically mask these locations outside of the modelling area.
Hi @damondpham,
Thank you! I just installed the 1.9 branch and ran a test. Unfortunately, I am getting an error, but at least I can see that low-mean areas are being excluded. A single-session example is below:
#1.9 SINGLE-SESSION
results <- BayesGLM_cifti(
fnames_ts[1],
surfL_fname = fname_gifti_left,
surfR_fname = fname_gifti_right,
brainstructures = c("left", "right"),
onsets = events_1,
TR = TR,
nuisance = confounds_1,
nuisance_include = c("drift", "dHRF"),
scale_BOLD = TRUE,
scale_design = TRUE,
Bayes = TRUE,
ar_order = 6,
ar_smooth = 5,
resamp_res = 10000,
num.threads = parallel::detectCores() - 2,
verbose = TRUE,
outfile = outfile,
return_INLA_result = TRUE,
avg_sessions = TRUE,
session_names = c('run-1'),
meanTol = 0.1,
varTol = 1e-06,
trim_INLA = TRUE
)
SETTING UP DATA
MAKING DESIGN MATRICES
RUNNING MODELS
.. LEFT CORTEX ANALYSIS
848 locations removed due to NA or NaN values .
97 additional locations removed due to low mean .
Error in if (nrow(z$vertices) != resamp_res) {: argument is of length zero
Traceback:
1. BayesGLM_cifti(fnames_ts[1], surfL_fname = fname_gifti_left,
. surfR_fname = fname_gifti_right, brainstructures = c("left",
. "right"), onsets = events_1, TR = TR, nuisance = confounds_1,
. nuisance_include = c("drift", "dHRF"), scale_BOLD = TRUE,
. scale_design = TRUE, Bayes = TRUE, ar_order = 6, ar_smooth = 5,
. resamp_res = 10000, num.threads = parallel::detectCores() -
. 2, verbose = TRUE, outfile = outfile, return_INLA_result = TRUE,
. avg_sessions = TRUE, session_names = c("run-1"), meanTol = 0.1,
. varTol = 1e-06, trim_INLA = TRUE)
2. BayesGLM(data = session_data, beta_names = beta_names, mesh = NULL,
. vertices = surf_list[[each_hem]]$vertices, faces = surf_list[[each_hem]]$faces,
. scale_BOLD = scale_BOLD, scale_design = FALSE, Bayes = do_Bayesian,
. ar_order = ar_order, ar_smooth = ar_smooth, num.threads = num.threads,
. return_INLA_result = return_INLA_result, outfile = outfile_name,
. verbose = verbose, avg_sessions = avg_sessions, meanTol = meanTol,
. varTol = varTol, trim_INLA = trim_INLA)
3. pw_smooth(vertices = mesh$loc, faces = mesh$graph$tv, AR = avg_AR,
. var = avg_var, FWHM = ar_smooth)
4. suppressWarnings(smooth_cifti(AR_xif, surf_FWHM = FWHM))
5. withCallingHandlers(expr, warning = function(w) if (inherits(w,
. classes)) tryInvokeRestart("muffleWarning"))
6. smooth_cifti(AR_xif, surf_FWHM = FWHM)
7. do.call(read_xifti, read_xifti_args)
8. (function (cifti_fname = NULL, surfL_fname = NULL, surfR_fname = NULL,
. brainstructures = c("left", "right"), idx = NULL, resamp_res = NULL,
. flat = FALSE, mwall_values = c(NA, NaN), verbose = FALSE,
. ...)
. {
. read_cifti(cifti_fname = cifti_fname, surfL_fname = surfL_fname,
. surfR_fname = surfR_fname, brainstructures = brainstructures,
. idx = idx, resamp_res = resamp_res, flat = flat, mwall_values = mwall_values,
. verbose = verbose, ...)
. })(cifti_fname = "/tmp/Rtmp8SVrWf/smoothed.dscalar.nii", brainstructures = "left",
. surfL_fname = "/tmp/Rtmp8SVrWf/left.surf.gii")
9. read_cifti(cifti_fname = cifti_fname, surfL_fname = surfL_fname,
. surfR_fname = surfR_fname, brainstructures = brainstructures,
. idx = idx, resamp_res = resamp_res, flat = flat, mwall_values = mwall_values,
. verbose = verbose, ...)
10. read_cifti_convert(cifti_fname, surfL_fname = surfL_fname, surfR_fname = surfR_fname,
. brainstructures = brainstructures, idx = idx, mwall_values = mwall_values,
. verbose = verbose, ...)
11. add_surf(xifti, surfL = surfL_fname)
Sorry, I forgot to add: could you also install ciftiTools from GitHub's 8.0 branch and use that? I made necessary fixes there too.
devtools::install_github("mandymejia/ciftiTools", "8.0")
Hi,
I am unfortunately back to getting the undefined 'spde' object error again (I am on my Linux machine). Is there an EM implementation on the 1.9 branch I can use to bypass INLA, so I can try it on my Mac?
SETTING UP DATA
.. reading in data for session 1
.. reading in data for session 2
.. reading in data for session 3
MAKING DESIGN MATRICES
RUNNING MODELS
.. LEFT CORTEX ANALYSIS
848 locations removed due to NA or NaN values in at least one scan.
101 additional locations removed due to low mean in at least one scan.
.... prewhitening... done!
.... estimating model with INLA
Error in INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2, : object 'spde' not found
Traceback:
1. BayesGLM_cifti(fnames_ts, surfL_fname = fname_gifti_left, surfR_fname = fname_gifti_right,
. brainstructures = c("left", "right"), onsets = events_all,
. TR = TR, nuisance = confounds_all, nuisance_include = c("drift",
. "dHRF"), scale_BOLD = TRUE, scale_design = TRUE, Bayes = TRUE,
. ar_order = 6, ar_smooth = 5, resamp_res = 10000, num.threads = parallel::detectCores() -
. 2, verbose = TRUE, outfile = outfile, return_INLA_result = TRUE,
. avg_sessions = TRUE, session_names = c("run-1", "run-2",
. "run-3"), meanTol = 0.1, varTol = 1e-06, trim_INLA = TRUE)
2. BayesGLM(data = session_data, beta_names = beta_names, mesh = NULL,
. vertices = surf_list[[each_hem]]$vertices, faces = surf_list[[each_hem]]$faces,
. scale_BOLD = scale_BOLD, scale_design = FALSE, Bayes = do_Bayesian,
. ar_order = ar_order, ar_smooth = ar_smooth, num.threads = num.threads,
. return_INLA_result = return_INLA_result, outfile = outfile_name,
. verbose = verbose, avg_sessions = avg_sessions, meanTol = meanTol,
. varTol = varTol, trim_INLA = trim_INLA)
3. system.time(INLA_result <- estimate_model(formula = formula,
. data = model_data, A = model_data$X, spde, prec_initial = 1,
. num.threads = num.threads, verbose = verbose))
4. estimate_model(formula = formula, data = model_data, A = model_data$X,
. spde, prec_initial = 1, num.threads = num.threads, verbose = verbose)
5. INLA::inla(formula, data = data, control.predictor = list(A = A,
. compute = TRUE), verbose = verbose, keep = FALSE, num.threads = num.threads,
. control.inla = list(strategy = "gaussian", int.strategy = int.strategy),
. control.family = list(hyper = list(prec = list(initial = prec_initial))),
. control.compute = list(config = TRUE), contrasts = contrasts,
. lincomb = lincomb)
6. inla.core(formula = formula, family = family, contrasts = contrasts,
. data = data, quantiles = quantiles, E = E, offset = offset,
. scale = scale, weights = weights, Ntrials = Ntrials, strata = strata,
. lp.scale = lp.scale, link.covariates = link.covariates, verbose = verbose,
. lincomb = lincomb, selection = selection, control.compute = control.compute,
. control.predictor = control.predictor, control.family = control.family,
. control.inla = control.inla, control.fixed = control.fixed,
. control.mode = control.mode, control.expert = control.expert,
. control.hazard = control.hazard, control.lincomb = control.lincomb,
. control.update = control.update, control.lp.scale = control.lp.scale,
. control.pardiso = control.pardiso, only.hyperparam = only.hyperparam,
. inla.call = inla.call, inla.arg = inla.arg, num.threads = num.threads,
. blas.num.threads = blas.num.threads, keep = keep, working.directory = working.directory,
. silent = silent, inla.mode = inla.mode, safe = FALSE, debug = debug,
. .parent.frame = .parent.frame)
7. inla.interpret.formula(formula, data.same.len = data.same.len,
. data = data, data.model = data.model, parent.frame = .parent.frame)
8. eval(parse(text = gsub("^f\\(", "INLA::f(", terms[i])), envir = data,
. enclos = p.env)
9. eval(parse(text = gsub("^f\\(", "INLA::f(", terms[i])), envir = data,
. enclos = p.env)
10. INLA::f(beta1, model = spde, replicate = repl1, hyper = list(theta = list(initial = c(-2,
. 2))))
Timing stopped at: 0.599 0.942 1.64
Do you think this is happening because the code assigning a value to spde is not surrounded by {}, as other code in if statements is?
That is, should it instead be if (do_Bayesian) { spde <- INLA::inla.spde2.matern(mesh) }?
Hi Steven,
That's unfortunate. I have the code working on my Mac and on Linux CentOS, but I suppose it could be possible for different operating systems to handle variables differently. The last idea I have is moving the call to INLA::inla directly inside the BayesGLM function. I just pushed a change to 1.9 that does this. Would you be willing to try it? If it still doesn't work, we'll have to investigate further into what's happening.
The brackets are indeed good practice, but the statement should be equivalent with or without.
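As a quick illustration of that point, a single statement under if behaves identically in R with or without braces (the condition and assigned value below are stand-ins, not the package's actual objects):

```r
# Stand-in condition and value, just to show the two if-statement forms.
do_Bayesian <- TRUE

if (do_Bayesian) spde_a <- "matern_spde"      # no braces
if (do_Bayesian) { spde_b <- "matern_spde" }  # with braces

identical(spde_a, spde_b)  # TRUE: braces are style, not behavior
```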
Also, we are still working on integrating the EM implementation with 1.9.
I will give it a try and update. Would you also mind sharing what build of INLA you are working with?
> packageVersion("INLA")
[1] ‘21.11.22’
Also, I'm actually on Red Hat Enterprise Linux 7, but I believe I installed the most recent CentOS INLA binaries.
Steven, can you try this version on your Mac? There should be fewer INLA issues on Mac.
I have been having trouble getting INLA to work at all on my Mac, but I will keep trying. The update to 1.9 did not work for me, but my current INLA version is more recent (‘22.1.19’). I will try rolling back to yours.
It looks like rolling back is making it work (on Linux)! It is strange: I was only able to find your version on the stable branch of INLA, which is not supposed to work with R 4.1, which is what I have installed. At this point I'm not going to question it. Waiting for my multi-session run to finish.
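For reference, one way to roll back is to install a specific source tarball from the INLA download repository rather than the latest build. The exact URL layout and file name below are assumptions; check the repository's contrib listing first.

```r
# Hypothetical example of pinning an INLA version; verify the actual
# tarball name under https://inla.r-inla-download.org/R/stable/src/contrib/
url <- "https://inla.r-inla-download.org/R/stable/src/contrib/INLA_21.11.22.tar.gz"
install.packages(url, repos = NULL, type = "source")
packageVersion("INLA")  # should then report the pinned version
```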
Yay! Let us know how the results look.
Dan is actively working to add the EM option on the 1.9 branch, so hopefully a completely INLA-free option will be available for you soon.
After addressing some smaller errors on my end and updating some packages, I got my first outputs! 🎉🎉🎉 I am not familiar with ciftiTools, however, and am having trouble plotting my beta coefficients on the surface. Is there a minimal example of doing something like this? I think the simulation vignette in this repository uses older versions of this package and ciftiTools, resulting in errors. Thank you so much again for all the hard work in getting this running!
I tried plot_BayesGLM_slice(results) (where results is a BayesGLM output object) and got the following error:
Error in in_binary_mask[, 2:1]: incorrect number of dimensions
I have also tried plot_slice(results$beta_estimates$'run-1', zlim=c(0,1.5)) and got the following error:
Error in inherits("matrix", x): 'what' must be a character vector
Traceback:
1. plot_slice(results$beta_estimates, zlim = c(0, 1.5))
2. melt_mat2(X)
3. lapply(X_list, melt_mat)
4. FUN(X[[i]], ...)
That's fantastic! Congratulations!
I will let Damon take the lead on advising on ciftiTools. Typically it is as easy as running plot() on your xifti object. There should be beta coefficients in xifti form returned by BayesGLM_cifti. Can you share the errors you're getting?
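A hedged sketch of what that might look like: the component name betas_Bayesian below is a guess, so inspect the output with names() to find the actual xifti-valued fields, and note that ciftiTools needs a Connectome Workbench path for surface rendering.

```r
library(ciftiTools)
ciftiTools.setOption("wb_path", "/path/to/workbench")  # required for rendering

names(results)  # locate the xifti-valued components of the BayesGLM output

# Hypothetical field name; substitute whatever names(results) reveals.
plot(results$betas_Bayesian, zlim = c(0, 1.5))
```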
plot_BayesGLM_slice(results)
-->Error in in_binary_mask[, 2:1]: incorrect number of dimensions
__
plot_slice(results$beta_estimates$'run-1', zlim=c(0,1.5))
-->
Error in inherits("matrix", x): 'what' must be a character vector
Traceback:
1. plot_slice(results$beta_estimates, zlim = c(0, 1.5))
2. melt_mat2(X)
3. lapply(X_list, melt_mat)
4. FUN(X[[i]], ...)
__
plot(results)
-->
Error in in_binary_mask[, 2:1]: incorrect number of dimensions
Traceback:
1. plot(results)
2. plot(results)
3. plot.BayesGLM(results)
4. plot_BayesGLM_slice(x, ...)
__
Exciting!
I might have fixed your problem just now (sorry about that!). Try the new version?
If I remember correctly, even if it works, there may be some room for improvement to the plotting functionality. I'll be working on this soon.
Hi,
I am sorry for posting all of these, but I think I must be really close to having an output! I ran into a
Cholmod error 'problem too large' at file ../Core/cholmod_dense.c, line 102
when running 3 sessions at a time, so now I am only running one.
Let me know if you think I should post this in the INLA repository instead. My inla.pardiso.check() returned no errors.
Best, Steven