spisakt / pTFCE

Probabilistic Threshold-Free Cluster Enhancement of Neuroimages
https://spisakt.github.io/pTFCE/
BSD 3-Clause "New" or "Revised" License

NaN's produced. #8

maitra commented 4 years ago

Bstand2-int-death-negative.nii.gz mask.nii.gz

The files above are Z-maps; they produce NaNs for 307 voxels. Here is what I do:


library(oro.nifti)   # provides readNIfTI()
library(pTFCE)       # provides ptfce()
Z <- readNIfTI(fname = "Bstand2-int-death-negative")   # Z-statistic map
MASK <- readNIfTI(fname = "mask.nii.gz")               # analysis mask
pTFCE <- ptfce(Z, MASK)

I get:

* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
There were 50 or more warnings (use warnings() to see the first 50)

And:

warnings()
Warning messages:
1: In sqrt(d * (8 * s + d)) : NaNs produced
2: In sqrt(d * (8 * s + d)) : NaNs produced
3: In sqrt(d * (8 * s + d)) : NaNs produced
4: In sqrt(d * (8 * s + d)) : NaNs produced
5: In sqrt(d * (8 * s + d)) : NaNs produced
6: In sqrt(d * (8 * s + d)) : NaNs produced
7: In sqrt(d * (8 * s + d)) : NaNs produced
....

What is wrong? How do I get out of these NaNs?
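For reference, the NaN count reported above can be checked directly on the result; a minimal sketch, assuming the Z, MASK and pTFCE objects from the snippet above:

# How many voxels of the enhanced map are NaN inside the mask:
sum(is.na(pTFCE$Z[MASK == 1]))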

spisakt commented 4 years ago

Hi, thanks a lot for the bug report! There was an underflow for your volume (related to the recent performance patch), which has somewhat low smoothness.

I have fixed it, please download version 0.2.2 and see if it solves the issue.

Cheers, Tamas
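One way to fetch a specific tagged release from GitHub is via devtools; a minimal sketch, assuming the fix is published under a "v0.2.2" tag:

# Install a specific tagged release of pTFCE from GitHub
# (the tag name "v0.2.2" is an assumption here).
install.packages("devtools")
devtools::install_github("spisakt/pTFCE", ref = "v0.2.2")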

maitra commented 4 years ago

Thank you. For this example it does, of course, solve the issue. We will also try other datasets and let you know if we come across further issues. Thank you again for fixing this so quickly.

maitra commented 4 years ago

Sorry, I spoke too soon. The problem still exists. Please try:

pTFCE <- ptfce(-Z, MASK)

The underflow has not gone away. How do I get out of these NaNs?

spisakt commented 4 years ago

Ohh, I see. I increased the new scaling factor. Does it work for you with v0.2.2.1?

Please let me know if you have further issues. I might have to search for a more elaborate fix...

maitra commented 4 years ago

So it does, of course, work for this particular dataset, but I get around 18.5% of the voxels inside the mask activated when I use two-sided thresholding, which is hard to believe. Can you please see if there is a more correct fix? By the way, here is the wrapper function I used.

ptfce.image <- function(infile, maskfile, one.sided = FALSE, alpha = 0.05) {
    if (require(oro.nifti)) {
        Z <- readNIfTI(fname = infile)
        MASK <- readNIfTI(fname = maskfile)
        if (require(pTFCE)) {
            if (one.sided) {
                ## one-sided: enhance Z, zero out voxels below the FWER threshold
                pTFCE <- ptfce(img = Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE$number_of_resels, alpha)
                pTFCE$Z[pTFCE$Z < Zcut] <- 0
            } else {
                ## two-sided: enhance Z and -Z separately, each at level alpha/2
                pTFCE <- ptfce(img = Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE$number_of_resels, alpha / 2)
                pTFCE$Z[pTFCE$Z < Zcut] <- 0
                pTFCE1 <- ptfce(img = -Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE1$number_of_resels, alpha / 2)
                pTFCE1$Z[pTFCE1$Z < Zcut] <- 0
                ## merge the surviving negative activations back with a negative sign
                pTFCE$Z[pTFCE1$Z != 0] <- -pTFCE1$Z[pTFCE1$Z != 0]
            }
            cat("Proportion of activated voxels:", mean(pTFCE$Z[MASK == 1] != 0), "\n")
            pTFCE$Z
        } else
            cat("Could not load package pTFCE: is it correctly installed?\n")
    } else
        cat("Could not load package oro.nifti: is it correctly installed?\n")
}

For the file above:

Z <- ptfce.image(infile = "Bstand2-int-death-negative", mask = "mask")
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
Proportion of activated voxels: 0.1813664 

For a different file (attached), I get 24.6%, which is simply not possible.


Z <- ptfce.image(infile = "Bstand2-int-death-positive", mask = "mask")
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
Proportion of activated voxels: 0.245529 

I guess I will wait for your more elaborate fix.

Bstand2-pTFCE-int-death-positive.nii.gz

spisakt commented 4 years ago

Are you sure these are Z-score images? If so, the high number of detected voxels does not look so unreasonable, given the distribution of your unenhanced Z-scores:

[Figure: histogram of the unenhanced Z-scores (blue) overlaid with the standard normal density (red)]

Red is the standard normal, blue is your data. Your distribution is clearly very different from the null, so you can expect a lot of "unlikely" clusters of activation to emerge.
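A comparison like the one in the figure can be reproduced roughly as follows; a minimal sketch, assuming the Z and MASK objects read earlier in this thread:

# Histogram of the unenhanced Z-scores inside the mask (blue),
# overlaid with the standard normal density (red).
z <- Z[MASK == 1]
hist(z, breaks = 100, freq = FALSE, col = "lightblue",
     main = "Unenhanced Z-scores vs. standard normal")
curve(dnorm(x), col = "red", lwd = 2, add = TRUE)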

maitra commented 4 years ago

The images are of voxel-wise standardized values under the null. The normal density in your figure is likely based on the assumption of independence between voxels (I am guessing), so I am not completely sure the two are comparable. It is also possible that these values have thicker tails under the null and are better modeled by a t distribution with a small number of degrees of freedom. I guess your assumption of normality under the null is very rigid.

spisakt commented 4 years ago

Yep, the plot assumes independence. But note that positive dependence (which we expect) will make the tails lighter (that is what we take advantage of when doing Gaussian Random Field (GRF) theory-based FWER correction, by the way). So the figure nicely illustrates that either (i) GRF is not a good null model for your data, or (ii) we can reject the null in a lot of voxels in your data, as pTFCE does.

It is not clear how you standardised the values, but if they are T-values, definitely "Gaussianize" them with the proper degrees of freedom so that they are valid input for pTFCE or any other GRF-based method.

Yes, the assumptions are somewhat rigid for all GRF-based methods (i.e. for the majority of parametric thresholding methods in neuroimaging). This can lead to problems and we should be very careful, see e.g. https://www.pnas.org/content/113/28/7900 and the commentaries.
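The t-to-Z conversion mentioned above can be done along these lines; a minimal sketch that assumes the degrees of freedom of the T-map are known (the value 30 below is only a placeholder):

# "Gaussianize" a voxel-wise T-statistic array: map the upper-tail t
# probability back through the standard normal quantile function.
t2z <- function(tmap, dof) {
  qnorm(pt(tmap, df = dof, lower.tail = FALSE), lower.tail = FALSE)
}

# Hypothetical usage on a plain array of T-values before wrapping it as nifti:
# z.nifti <- as.nifti(t2z(tvals, dof = 30))
# pTFCE   <- ptfce(img = z.nifti, mask = MASK)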

spisakt commented 4 years ago

The strange distribution might also explain why the underflow (and thus the NaN issue) did not happen before.

maitra commented 2 years ago

This bug is not quite resolved. Here is an example: example.rda.gz (please unzip the file before using it).


load('example.rda')
library(pTFCE)
library(oro.nifti)
mask.nifti  <- as.nifti(mask) # since pTFCE only takes nifti
tstats.nifti  <- as.nifti(tv) # since pTFCE only takes nifti
xx <- ptfce(img = tstats.nifti, mask = mask.nifti)

oro.nifti 0.11.2
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
There were 50 or more warnings (use warnings() to see the first 50)

warnings()
Warning messages:
1: In sqrt(d * (8 * s + d)) : NaNs produced
2: In sqrt(d * (8 * s + d)) : NaNs produced

We get more than 5000 NaNs. Thanks!

spisakt commented 2 years ago

Thanks, I'll have a look as soon as possible.