bids-apps / MRtrix3_connectome

Generate subject connectomes from raw BIDS data & perform inter-subject connection density normalisation, using the MRtrix3 software package.
http://www.mrtrix.org/
Apache License 2.0

Newcomer. Can you help me to find the scratch folder? I need the recon-all results. Thanks! #125

Open ZhyGao opened 1 year ago

ZhyGao commented 1 year ago

I ran the Docker container with the preproc analysis mode, and I noticed that it generated a scratch folder. How can I keep it, or find it afterwards? I need the results in it. Thanks very much.

Lestropie commented 1 year ago

Hi @ZhyGao,

I think currently the only way to retain the contents of the scratch directory when using a container environment is to include the --debug flag; that will move the scratch directory to the mounted output directory, and prevent its erasure upon completion.
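For reference, a sketch of what such an invocation might look like, following the usual BIDS-Apps calling convention (the local paths and participant label here are placeholders, not from this thread):

```shell
# Sketch: run the preproc analysis level with --debug so that the scratch
# directory is moved into the mounted output directory rather than deleted.
# /path/to/bids and /path/to/output are hypothetical host paths.
docker run -i --rm \
    -v /path/to/bids:/bids_dataset:ro \
    -v /path/to/output:/output \
    bids/mrtrix3_connectome \
    /bids_dataset /output preproc \
    --participant_label 01 \
    --debug
```

After completion, the retained scratch directory should then appear somewhere under `/path/to/output` on the host.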

If access to intermediate data is necessary, then a single-button-press application may not be the best fit for you. For things like recon-all results (which aren't part of the preproc analysis level; I'm not sure of the source of that discrepancy), personally I'd rather have FreeSurfer run in its own App, and then have an App like this one load the derivatives of that App and utilise them, rather than running FreeSurfer internally within its own container. But I think the domain is still some way away from properly standardising how such things will work in the BIDS ecosystem.

Rob

ZhyGao commented 1 year ago

Thanks for your reply. I have stored the scratch folder. I have a few other questions:

  1. I found a script that was written about three years ago. That script generated files named sub-XXX(participant_label)_parc-XXX(parcellation)_level-participant_connectome.csv and the corresponding meanlength.csv. But my run generates sub-XXX(participant_label)_desc-XXX(parcellation)_connectome.csv and meanlength.csv. Are these the same files, just renamed in an update?
  2. Similarly for sub-${participant_label}/anat/sub-${participant_label}_parc-${parcellation}_indices.nii.gz: my anat folder contains these files instead. [screenshot]
  3. The same goes for desc-desikan_lookup.txt; I need parc-desikan_lookup.txt.

I am unsure what the difference between "parc" and "desc" is. How can I generate the "parc" files? Waiting for your help, thanks very much.

Lestropie commented 1 year ago

  1. Are you describing data that were generated using a prior version of this script? I expect that it is likely just a name update, but I don't get to play with this very often so I don't recall the full change history very well any more. For participant level connectomes there's nothing major that comes to mind. It would be preferable for the software version to be added as a suffix to the derivatives directory name, and for run levels to then be sub-directories within that (at least I think that's supposed to be the structure), in which case you'd know what version of the App was used to generate each. Unfortunately it looks like I don't currently write the version number anywhere else in metadata.
  2. This is almost certainly just a name change. I first wrote this App just after BIDS became a thing, well before BIDS derivatives. I only transitioned to dseg once it became part of the specification.
  3. The lookup table does not vary across subjects, so from memory at some point I started exporting it to the derivatives root directory (if it doesn't yet exist) rather than writing the same thing in every subject directory.
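For anyone else trying to make use of these outputs: regardless of whether the filenames use the parc- or desc- entity, the connectome and mean-length files are plain-text parcel-by-parcel matrices. A minimal sketch of working with them, using synthetic stand-in data rather than real App outputs (the real files' delimiter and parcel count will of course differ):

```python
import numpy as np

# Synthetic stand-ins for <...>_connectome.csv and <...>_meanlength.csv:
# symmetric parcel-by-parcel matrices (here 3 parcels).
connectome = np.array([[0.0, 10.0, 5.0],
                       [10.0, 0.0, 2.0],
                       [5.0, 2.0, 0.0]])
meanlength = np.array([[0.0, 40.0, 25.0],
                       [40.0, 0.0, 60.0],
                       [25.0, 60.0, 0.0]])

# Node strength: sum of connection weights attached to each parcel.
strength = connectome.sum(axis=1)

# Connection-weighted mean streamline length across the whole matrix
# (upper triangle only, so each edge is counted once).
iu = np.triu_indices_from(connectome, k=1)
weighted_mean_length = np.average(meanlength[iu], weights=connectome[iu])

print(strength)              # per-parcel strengths
print(weighted_mean_length)  # single weighted-mean value
```

With real outputs, `np.loadtxt()` on the connectome.csv file would replace the hard-coded arrays, and the lookup table (exported once to the derivatives root, as described above) maps matrix row/column indices to parcel names.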

ZhyGao commented 1 year ago

Hello, Lestropie. Here's a new question; can you help me? I processed some test data like this in the dwi folder, and it works. [screenshot] But when I use my real data, an error stops the script. The data listed in dwi and the error information are below. [screenshot] [screenshot]

Lestropie commented 1 year ago

This error has been reported many times; unfortunately one issue with reporting errors via screenshots is that they are not text searchable by others.

I can only elaborate on the error message generated. As per the README page, the ability to correct EPI susceptibility distortions in the input data is requisite, due to subsequent use of the ACT framework. This App currently only supports correction of those distortions through the FSL topup method, which relies on having spin-echo EPI images (which can come either from DWI b=0 images or the fmap/ directory) with phase encoding contrast: different phase encoding directions and/or readout times. In the absence of such data, EPI susceptibility distortion correction can't be performed, and so the whole process aborts.
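This precondition can be checked before launching the App by inspecting the BIDS JSON sidecars: topup needs at least two spin-echo EPI acquisitions that differ in `PhaseEncodingDirection` and/or `TotalReadoutTime` (both standard BIDS sidecar fields). A rough sketch of such a check; the helper function and the example file layout here are hypothetical, not part of the App:

```python
import json
import tempfile
from pathlib import Path

def pe_contrast_present(sidecar_paths):
    """Return True if the sidecars span more than one combination of
    phase encoding direction and total readout time, i.e. the kind of
    contrast that FSL topup requires."""
    combos = set()
    for path in sidecar_paths:
        meta = json.loads(Path(path).read_text())
        combos.add((meta.get("PhaseEncodingDirection"),
                    meta.get("TotalReadoutTime")))
    return len(combos) > 1

# Hypothetical example: two acquisitions with reversed phase encoding
# (AP "j-" vs PA "j") would satisfy the requirement.
tmp = Path(tempfile.mkdtemp())
for name, pedir in [("dir-AP_epi.json", "j-"), ("dir-PA_epi.json", "j")]:
    (tmp / name).write_text(json.dumps(
        {"PhaseEncodingDirection": pedir, "TotalReadoutTime": 0.05}))

print(pe_contrast_present(sorted(tmp.glob("*.json"))))  # True
```

If this returns False for all spin-echo EPI images in a dataset (DWI b=0 sidecars plus anything in fmap/), the App will abort for the reason described above.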

Using an alternative solution for EPI distortion correction for processing datasets where such image information is absent would be nice (see #81, #101). Unfortunately this App has fallen well down my priority list, and TBH its design does not lend itself particularly well to such extensibility.