@dnkennedy the images look over-cropped. This should have been fixed with the latest updates, but it clearly isn't working for this data, and it's not immediately apparent to me why, though I have a couple of ideas. It would be awesome if you could share the input and/or output data with me so I can take a closer look and troubleshoot on our end, but I understand if that may not be permissible without a DUA.
From the logs, it looks like the work dir contains old outputs (e.g. `Skipping FSL flirt command because its output image(s) listed below exist(s)`), so you could try wiping the outputs and re-running as a sanity check, in case these intermediate outputs were generated by a prior version. (You could also try the `--overwrite` flag, but I think it's buggy (see #44), so it's probably safer to just start fresh.) We could also work on providing test data soon for an additional sanity check (issue #135).
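For example, something along these lines (paths are placeholders for your setup):

```bash
# Placeholder paths -- adjust to wherever your work dir and outputs actually live.
# This just clears stale intermediate files so the next run starts from scratch.
rm -rf /path/to/workdir/bibsnet
rm -rf /path/to/derivatives/bibsnet
mkdir -p /path/to/workdir
# ...then re-run your usual singularity command against the fresh work dir
```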
I'll look at the info you've provided more closely in the next few days, but let me know if you're able to share the data for this subject!
@LuciMoore thanks for looking at this. I cleared the workdir and reran. Same error (I think), but without the "skipping FLIRT" notifications. The centering of the prebibsnet output looks better (snapshots included below). I'm looking into the sharing status for this data...
The centering looks better now.
@dnkennedy would you mind editing your message (via the ellipsis menu in the top right) and wrapping all that BIBSnet output in triple backticks? i.e. like:
```
WARNING: underlay of /etc/localtime required more than 50 (113) bind mounts
WARNING: underlay of /usr/bin/nvidia-smi required more than 50 (496) bind mounts
...
For your input files at the path below, check their filenames and visually inspect them if needed.
/workdir/bibsnet/sub-BB03601/ses-1/input
```
Which will make it easier to read, like below:
```
WARNING: underlay of /etc/localtime required more than 50 (113) bind mounts
WARNING: underlay of /usr/bin/nvidia-smi required more than 50 (496) bind mounts
...
For your input files at the path below, check their filenames and visually inspect them if needed.
/workdir/bibsnet/sub-BB03601/ses-1/input
```
@scott-huberty I tried the triple backticks AND a trick Yarik taught me to make it collapsible... Let me know if that is too much.
Nope, I love the Details element!
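(For anyone finding this thread later, the collapsible trick is GitHub's `<details>` element; a minimal example looks roughly like this:)

```
<details>
<summary>BIBSnet log output (click to expand)</summary>

paste the log here, wrapped in triple backticks as shown above

</details>
```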
@dnkennedy It looks like the overcropping was resolved, so that's good. Unfortunately nnUNet_predict is a bit of a black box to me without more info (unless this is another error caused by the configuration of the container). Perhaps it's running out of memory? What kind of computing resources are you using? We recently updated the documentation to specify:
> When running BIBSnet using a GPU, the job typically requires about 45 minutes, 20 tasks, and one node with 40 GB of memory. However, we have also had success running BIBSNet on a CPU with 40 GB of RAM.
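If your cluster uses a scheduler like SLURM, that recommendation translates to a job request roughly like the sketch below (job name, GPU request, and wall time are placeholders; adjust for your system):

```bash
#!/bin/bash
#SBATCH --job-name=bibsnet     # placeholder job name
#SBATCH --nodes=1              # one node
#SBATCH --ntasks=20            # ~20 tasks
#SBATCH --mem=40g              # 40 GB of memory
#SBATCH --gres=gpu:1           # omit if running on CPU only
#SBATCH --time=02:00:00        # ~45 min on GPU; leave some headroom

# your usual singularity run command goes here
```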
Let me know if you're able to share the input/output data or some part of it. I'll see if @tjhendrickson and @paul-reiners (who developed/trained the nnUNet model used by BIBSNet) have thoughts on other mechanisms for troubleshooting
Thanks, @LuciMoore. I'm running on a GPU on my university cluster. I'll double-check the memory allocation and make sure it's 40 GB.
@dnkennedy what I usually do at this point is run BIBSnet with a test file to determine whether the issue is with my specific file or with the image/system, etc. Sorry for the long code block, but this is what I did when I was having issues with BIBSnet:
```bash
# This assumes you are NOT using a Windows OS.

# Create a virtual environment in the 'my_env' directory (if you don't already have one)
python3 -m venv my_env

# Activate the virtual environment (Unix-based systems)
source my_env/bin/activate

# Install openneuro-py within the virtual environment
python3 -m pip install openneuro-py

# Start a python session
python3
```

```python
from pathlib import Path
import openneuro

# Create the directory for the dataset
Path("./ds004776").mkdir()

# Download sub-01 of the dataset to the ds004776 directory, using openneuro-py
openneuro.download(dataset="ds004776", target_dir="./ds004776", include="sub-01")

# Close the python session
exit()
```
Then run your singularity command as usual, adjusting the paths to point at this new test dataset:
```bash
singularity run --nv --cleanenv --no-home \
    -B /path/to/ds004776:/input \
    -B /path/to/derivatives:/output \
    /path/to/bibsnet.sif \
    /input /output participant \
    -participant 01
```
If you do this, let us know if the pipeline successfully runs with this file!
@dnkennedy can you try running with `-d` instead of `-v`? Hopefully that will give more useful information from nnUNet!
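i.e. something along these lines (the bind mounts and paths are placeholders standing in for your original command; the only change is the debug flag):

```bash
# Same command you've been running (paths and participant label here are placeholders),
# just with -d in place of -v for more detailed nnUNet output
singularity run --nv --cleanenv --no-home \
    -B /path/to/bids:/input \
    -B /path/to/derivatives:/output \
    /path/to/bibsnet.sif \
    /input /output participant \
    -participant BB036 -d
```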
@scott-huberty @LuciMoore the test worked beautifully...
will work on the sharing of my example data...
my data is from Philips (should have mentioned earlier) and was BIDSified via heudiconv. I have a couple (well, 3 each) of T1s and T2s, for what that's worth.
```
tree .
.
└── ses-1
    └── anat
        ├── sub-BB036_ses-1_acq-MRPAGE_run-10_T1w.json
        ├── sub-BB036_ses-1_acq-MRPAGE_run-10_T1w.nii.gz
        ├── sub-BB036_ses-1_acq-MRPAGE_run-14_T1w.json
        ├── sub-BB036_ses-1_acq-MRPAGE_run-14_T1w.nii.gz
        ├── sub-BB036_ses-1_acq-MRPAGE_run-21_T1w.json
        ├── sub-BB036_ses-1_acq-MRPAGE_run-21_T1w.nii.gz
        ├── sub-BB036_ses-1_run-11_T2w.json
        ├── sub-BB036_ses-1_run-11_T2w.nii.gz
        ├── sub-BB036_ses-1_run-12_T2w.json
        ├── sub-BB036_ses-1_run-12_T2w.nii.gz
        ├── sub-BB036_ses-1_run-13_T2w.json
        ├── sub-BB036_ses-1_run-13_T2w.nii.gz
        ├── sub-BB036_ses-1_run-20_T2w.json
        └── sub-BB036_ses-1_run-20_T2w.nii.gz
```
@dnkennedy awesome! Ok, so yes, this must have something to do with the data. Are there any notable differences in the metadata for the input files created from your data vs. the test data (especially orientation and coordinate-system info)? Was the test data converted with heudiconv as well?
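If it helps, a quick way to compare is to dump the headers with FSL and eyeball the orientation fields; the filenames/paths below are just examples:

```bash
# Example filenames only -- compare one of your runs against the test image,
# paying particular attention to the qform/sform codes and axis orientation
fslhd sub-BB036_ses-1_acq-MRPAGE_run-10_T1w.nii.gz
fslhd ds004776/sub-01/anat/sub-01_T1w.nii.gz   # exact test-subject path may differ
fslorient -getorient sub-BB036_ses-1_acq-MRPAGE_run-10_T1w.nii.gz

# (optional) double-check that the heudiconv-converted dataset is still valid BIDS
bids-validator /path/to/your/bids/dataset
```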
> was the test data converted with heudiconv as well?
@LuciMoore I think the data are already BIDS compliant. It's just the infant freesurfer test subject that is on OpenNeuro.
@LuciMoore @scott-huberty some progress. Prompted by the sample data success, I ripped down my data, which had 3 T1s and 4 T2s (or so) to just 1 of each, and it ran ok, I guess.
So I'm now restoring the multiple T1s and T2s to see if that's what introduced the problem. Will report back if that fails with the "-d" output.
Well, I cannot break it anymore; after the successful test run, as described by @scott-huberty above, my runs on a single T1/T2, on multiple T1s/T2s, and in the context of my complete study are all operating as expected. I guess the only thing I'm doing a little differently, since I am using a 'working directory', is making sure I use a fresh, empty directory that isn't 'contaminated' with any prior (failed) runs.
So, I guess we can close this, and I'll try to break it some other ways...
Awesome @dnkennedy
If I had a nickel for every time that something magically started working again.. Glad things are working for now!
@dnkennedy Glad to hear it's running now! Up to this point our team has largely focused on the model training and refinement of preprocessing steps, but the code base itself would certainly benefit from some polish, so keep breaking away! I've linked this to an existing issue for our future reference.
What happened?
Thanks for fixing my prior "Could not find a task with ID" issue. My issue has now 'migrated'.
I'm running version 3.4.2 as a Singularity container on my university HPC:

```
singularity pull bibsnet-3.4.2.sif docker://dcanumn/bibsnet:release-3.4.2
```
It proceeds for a while and then exits with an error:

```
ERROR 2024-09-10 23:28:28,339: nnUNet failed to complete, exitcode -9
Error: Output segmentation file not created at the path below during nnUNet_predict run.
/workdir/bibsnet/sub-BB03601/ses-1/output
For your input files at the path below, check their filenames and visually inspect them if needed.
/workdir/bibsnet/sub-BB03601/ses-1/input
```
What command did you use?
What version of BIBSnet are you using?
3.4.2
Directory Structure
No response
Relevant log output
Add any additional information or context about the problem here.
It seems to have finished the prebibsnet stage. In the bibsnet part of the working dir, under input, I have two images: sub-BB03601_ses-1_optimal_resized_0000.nii.gz and sub-BB03601_ses-1_optimal_resized_0001.nii.gz
I've attempted to add a snapshot of each.
The centering of these seem, let's say, unusual...
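(For reference, quick orthogonal snapshots like these can be generated with FSL's slicer; the output filenames here are arbitrary:)

```bash
# Generate mid-slice montages of the two resized inputs that BIBSnet hands to
# nnUNet_predict, to eyeball the centering/cropping. Output names are arbitrary.
cd /workdir/bibsnet/sub-BB03601/ses-1/input
slicer sub-BB03601_ses-1_optimal_resized_0000.nii.gz -a resized_0000.png
slicer sub-BB03601_ses-1_optimal_resized_0001.nii.gz -a resized_0001.png
```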