Closed: 0rC0 closed this issue 3 years ago
Unfortunately, this is not reproducible with sample data. Can you make nu.mgz
available to us? Thanks
I cannot post it on GitHub; could I maybe have an email address or similar to send you a download link? Thanks
No problem. Can you send it through our FTP file drop and let me know what you've saved it as?
Thanks!
The file is saved as nu_Issue822.mgz
under transfer/incoming
Thanks! But looking at the file nu_Issue822.mgz, it has zero bytes; are you sure it fully uploaded?
Oops... sorry,
now the file should be there:
ftp> cd transfer/incoming
250 Directory successfully changed.
ftp> put nu_Issue822.mgz
local: nu_Issue822.mgz remote: nu_Issue822.mgz
227 Entering Passive Mode (132,183,240,105,148,2).
150 Ok to send data.
226 Transfer complete.
6349847 bytes sent in 1.64 secs (3.6860 MB/s)
ftp>
Thanks. What's happening here is the normalization is struggling to fit a window to white matter intensities, and it eventually gives up. Has your recon-all input image been processed at all? It looks like it has been - maybe nonlocal filtering? If not, do you mind sending the input data as well?
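As a rough illustration of the step that is failing (this is not FreeSurfer's actual algorithm, which fits local intensity windows rather than a global scale), normalization estimates a white-matter intensity and rescales the volume so that white matter lands near the conventional target of 110; all names below are illustrative:

```python
import numpy as np

def normalize_to_wm(volume, wm_mask, target=110.0):
    """Rescale so the median white-matter intensity maps to `target`.

    Toy sketch only: mri_normalize instead fits windows to local
    white-matter intensities, and fails when it cannot find a fit.
    """
    wm_median = np.median(volume[wm_mask > 0])
    if wm_median <= 0:
        raise ValueError("cannot estimate white-matter intensity")
    return volume * (target / wm_median)

# toy 1-D "volume" with one white-matter voxel in the middle
vol = np.array([50.0, 100.0, 200.0])
wm = np.array([0, 1, 0])
norm = normalize_to_wm(vol, wm)  # white-matter voxel rescaled to 110
```

On heavily filtered input, the intensity distribution can be flattened enough that no consistent white-matter estimate exists, which matches the "gives up" behavior described above.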
We are analyzing 7 Tesla MP2RAGE data. If we give recon-all the UNI T1w image, it often has problems due to the "salt-and-pepper" noise; with the denoised UNI T1w, recon-all gives a worse segmentation because the brain contrast is worse after denoising.
We do the brain extraction of the UNI image with the SPM12 three-tissue segmentation (which also performs a bias-field correction), then we apply the brain mask from the UNI to the denoised image and give the resulting image to recon-all (there should be a thread from early 2017 in the mailing list about it).
We had this problem for only two of 120 subjects.
Tomorrow I can upload the original images.
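For reference, the masking step described above (applying the UNI-derived brain mask to the denoised volume) amounts to a voxelwise multiply. A minimal sketch with synthetic arrays, assuming the volumes are already loaded as numpy arrays (file I/O, e.g. via nibabel or FreeSurfer's mri_mask, is omitted; the names here are illustrative, not the actual pipeline code):

```python
import numpy as np

def apply_brain_mask(volume, mask, fill=0.0):
    """Keep voxels inside the binary brain mask; set the rest to `fill`."""
    return np.where(mask > 0, volume, fill)

# toy 2x2x2 arrays standing in for UNIDEN and the SPM12 brain mask
uniden = np.array([[[100.0, 50.0], [80.0, 20.0]],
                   [[90.0, 10.0], [70.0, 30.0]]])
mask = np.array([[[1, 0], [1, 0]],
                 [[1, 0], [1, 0]]])

brain_only = apply_brain_mask(uniden, mask)  # background voxels zeroed
```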
I've also uploaded the input data:
-) UNI_Issue822.nii.gz and UNIDEN_Issue822.nii.gz are the starting points for brain and background respectively.
-) UNIbrain_DENbg_Issue822.nii.gz is the file I convert to mgz to be given as input to recon-all.
Thanks, and if you need anything else, just tell me :-)
Sorry for the delay here. We tested things out a bit, and honestly, I think it's going to be tough to get FS to cope with data that's preprocessed like this, but you might have some luck reaching out to the FS mailing list. It's possible someone else will have a good solution. Closing this, as we generally want to reserve the GitHub issues page for specific bugs. That said, I've put in a few checks to make sure the original error message is actually informative.
Platform details:
Summary
During recon1, during mri_normalize, "No such file or directory" is returned. The T1w looks good; no artifacts or defects are visible. The "building Voronoi diagram..." step takes almost no time, while normally it takes a few tens of seconds. The problem should be somewhere near that step. Bug already reported in the mailing list last June 8th, without solution.
Actual behavior
Expected behavior
How to replicate the behavior
mri_normalize -g 1 -seed 1234 -mprage -noconform nu.mgz T1.mgz
recon-all.log