Closed: amirzolal closed this issue 5 years ago.
Just one piece of info: after installation by compiling failed (as I mentioned in my previous issue here), I installed from the latest binary release.
Anyone here? :)
You have several questions:
installation - we are in the middle of updating to ITK version 5, so build fixes will take some time to propagate through the full stack. Builds should be stable again in about a month, maybe less.
re: sccanFirstEigenImage
let's just use this example:
im <- antsImageRead( getANTsRData( "r64" ) )
mask <- thresholdImage( im, 250, Inf )
dd <- sum( mask == 1 )
mat1 <- matrix( rnorm( dd * 10 ), nrow = 10 )
mat2 <- matrix( rnorm( dd * 10 ), nrow = 10 )
initlist <- list()
for ( nvecs in 1:2 ) {
  init1 <- antsImageClone( mask )
  init1[ mask == 1 ] <- rnorm( dd )
  initlist <- lappend( initlist, init1 )
}
ff <- sparseDecom2( inmatrix = list( mat1, mat2 ), inmask = list( mask, mask ),
  sparseness = c( 0.1, 0.1 ), nvecs = length( initlist ), smooth = 1,
  cthresh = c( 0, 0 ), initializationList = initlist, ell1 = 11 )
produces:
> dim( ff$eig1 )
[1] 16 2 # these are determined by the number of eigenvectors and size of mat1
> dim(mat1)
[1] 10 16
> length(initlist)
[1] 2
re: memory - you might need to rm some objects; a variety of strategies is discussed here: https://stackoverflow.com/questions/11579765/how-to-clean-up-r-memory-without-the-need-to-restart-my-pc ... maybe the temp files are an issue.
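A minimal sketch of the rm/gc strategy discussed in that thread, applied inside a loop (the object name `big_result` is just an illustrative stand-in for whatever large object each iteration creates):

```r
# Sketch: explicitly release large objects between loop iterations.
for ( run in 1:3 ) {
  big_result <- matrix( rnorm( 1e5 ), nrow = 100 )  # stand-in for a large result
  # ... use big_result here ...
  rm( big_result )  # drop the reference so the object becomes collectable
  gc()              # ask R to run garbage collection and return memory to the OS
}
```

Note that gc() only frees memory for objects that no longer have any reference; if something (a list, an environment, a saved model) still points at the data, rm() on one name will not release it.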
re: "how can I assess that the results I see are meaningful for the data I have" - this is a judgment call that can be assisted by permutation testing, by assessing predictive value on an independent task, and by visual inspection - pretty standard stuff for checking the validity of statistical models. I do not recommend trying to assign significance to individual voxels, either by VBM or by looking at the spatial values of SCCAN eigenvectors.
re: very high correlations - seems suspicious. I would look carefully at the data to see why this might be the case, i.e., scatterplots of the projections.
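As a sketch of what "scatterplots of the projections" could look like, using the dimensions from the example above (mat1 and mat2 are 10 x 16; the eigenvector matrices are 16 x 2, one column per eigenvector). The eig1/eig2 matrices here are random stand-ins for ff$eig1 and ff$eig2, so the correlation below is meaningless - the point is only the mechanics:

```r
set.seed( 1 )
n <- 10; d <- 16; k <- 2
mat1 <- matrix( rnorm( n * d ), nrow = n )  # view 1: subjects x features
mat2 <- matrix( rnorm( n * d ), nrow = n )  # view 2
eig1 <- matrix( rnorm( d * k ), nrow = d )  # stand-in for ff$eig1
eig2 <- matrix( rnorm( d * k ), nrow = d )  # stand-in for ff$eig2
p1 <- mat1 %*% eig1  # subject scores on each eigenvector, view 1
p2 <- mat2 %*% eig2  # subject scores, view 2
cor( p1[, 1], p2[, 1] )   # the correlation being reported for eigenvector 1
plot( p1[, 1], p2[, 1] )  # a perfect line through ~15 points is suspect
```

With only 15 subjects per group, a correlation of exactly 1 in the scatterplot is a red flag worth chasing (e.g., leakage, degenerate sparseness, or an outlier-driven fit).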
For future reference: the memory-consumption problem was solved by installing the latest GitHub version. It seems there was a leak in the latest binary, at least when installed on CentOS or Ubuntu.
Does permutation testing mean setting perm=100 or similar? How many permutations would be enough in your experience?
Greater than 1000 ... but 100 or even 10 may be enough to give you an idea of what's going on.
The strategy proposed by some (including Winkler 2014) is to try a small number first, e.g., 100: if p is high, stop; if p is low, keep going to reach the lowest possible p-value. With sparseDecom2 this may not work very well because each run is separate - for example, I don't think you can run 100 permutations in a first run and then 900 more to reach 1000. I don't think the first 100 are saved somewhere so you can reuse them later, but who knows, maybe @stnava knows better.
Dorian
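A minimal sketch of that "start small, extend only if promising" strategy. Everything here is illustrative: `run_permutation` is a hypothetical stand-in for one permuted analysis, and the early-stopping threshold is arbitrary - this is not something sparseDecom2 provides:

```r
# Hypothetical sketch of sequential permutation testing with early stopping.
set.seed( 2 )
observed <- 0.8  # the statistic from the unpermuted analysis
run_permutation <- function() cor( rnorm( 10 ), rnorm( 10 ) )  # stand-in

exceed <- 0; nperm <- 0
for ( batch in c( 100, 900 ) ) {        # first 100 perms, then 900 more
  for ( i in 1:batch ) {
    if ( abs( run_permutation() ) >= observed ) exceed <- exceed + 1
  }
  nperm <- nperm + batch
  p <- ( exceed + 1 ) / ( nperm + 1 )   # permutation p-value with +1 correction
  if ( p > 0.2 ) break                  # clearly null: stop early
}
p
```

The point Dorian raises still stands: this only works if the permutation counts from the first batch can be carried into the second, which requires the caller (not sparseDecom2 itself) to do the bookkeeping.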
sparseDecom2 provides the necessary output to support multiple runs.
Briefly, you just add the raw significance counts over the runs,
or average the p-values if you keep the number of permutations constant across runs.
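A back-of-the-envelope sketch of combining runs as described. The `exceed`/`nperm` bookkeeping is hypothetical - it stands for "how many permuted statistics beat the observed one out of how many permutations", not for actual field names in sparseDecom2's output:

```r
# Combine raw significance counts over two runs (100 perms, then 900).
runs <- list( list( exceed = 3,  nperm = 100 ),
              list( exceed = 27, nperm = 900 ) )
total_exceed <- sum( sapply( runs, `[[`, "exceed" ) )
total_perm   <- sum( sapply( runs, `[[`, "nperm" ) )
p_combined   <- ( total_exceed + 1 ) / ( total_perm + 1 )
p_combined

# Alternatively, with a constant permutation count per run,
# average the per-run p-values (example values):
pvals <- c( 0.03, 0.05, 0.02 )
mean( pvals )
```

Pooling the counts is the cleaner option, since it gives the same answer you would have gotten from a single 1000-permutation run.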
Closing, as I think we've covered the questions here. Please open a new, more focused issue if you have more questions.
Hi,
I was wondering what data is in the resulting image sccanFirstEigenImage <- matrixToImages( t( ageSccanResults$eig1 ), grayMatterMask )[[2]], as opposed to [[1]], mentioned in the tutorial?
Further, I was trying to do this:
with length=10 as mentioned in the tutorial, but the process fills the 16 GB of RAM and 32 GB of swap space and ends with a bad_alloc error. This is why I added the gc(), but it did not really help (no effect at all). I have tested without the for loop, and free -m reports more and more memory occupied with each run, although the gc() values stay the same. Is there any other way to optimize? I do not understand why the data stay in memory for each run of the for loop; I would expect the data to be released from memory after assigning new results to the same variable name.
Just one more question: how can I assess that the results I see are meaningful for the data I have? For instance, I will compare FAs of about 15 subjects with normosmia to 15 patients with congenital anosmia, coded as 0 and 1 as you suggested. How do I know that the eigenvectors displayed are "significant" in some way? That is surely not the right way to describe it, but perhaps you know what I mean.
Thanks in advance for the answer,
Amir
PS: And another question - I get correlations of 1 for all 5 sparseness values between 0.01 and 0.1 in the comparison (normosmia vs. congenital anosmia). Is it possible that the structural differences are so large that the correlation is 1? The resulting image [[1]] is anatomically very plausible in some cases, but sometimes I am not sure. Maybe I am doing something wrong.
PS2: Thanks Nick for pointing me over to this forum.