Closed by insanem3rlin 2 months ago
We are working on this in https://github.com/MS-PINPOINT/mindGlide/pull/24; it should be resolved soon.
Is there a way to use mindGlide while this is being resolved?
Until the issue is resolved, you can run this command instead:
docker run -it --rm --gpus all -w /mindGlide -v $(pwd):/mnt mspinpoint/mindglide:may2024 {name_of_nifti_file}
A small adjustment for Windows users: the $(pwd):/mnt part has to be in quotation marks, otherwise Docker does not recognize the command and answers with "invalid reference format". I do not know how it behaves on Linux, so I guess this is just a Windows thing. Also make sure that you run this command in a PowerShell instance elevated with admin rights, as the regular command line sometimes does not know the $(pwd) expression.
So for me this worked: docker run -it --rm --gpus all -w /mindglide -v "$(pwd):/mnt" mspinpoint/mindglide:may2024 {name_of_nifti_file}
Hi,
I've tried running the temporary fix posted above but keep getting the following:
Traceback (most recent call last):
  File "/opt/mindGlide/mindGlide/run_inference.py", line 230, in <module>
Thanks in advance for any suggestions :)
Your error looks like Docker does not have the correct permissions. Can you check whether the Docker container has write permissions in the directory you are mounting?
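One way to check this from the host side is to attempt an actual write in the directory before mounting it. This is a hypothetical helper, not part of mindGlide; it probes with a real file creation rather than trusting file-mode bits, which can be misleading on mounted file systems:

```python
import os

def dir_is_writable(path: str) -> bool:
    """Try a real write instead of trusting os.access, since mounted
    or network file systems can report misleading permission modes."""
    probe = os.path.join(path, ".mindglide_write_test")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:
        return False

# Example: check the directory you intend to mount as /mnt
print(dir_is_writable("."))
```

If this prints False for the directory you pass to -v, the container will hit the same wall.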
Another way to solve permission issues would be to create a /tmp/mindglide directory on your local machine, copy the nifti file into it, and then run the Docker container again:
docker run -it --rm --gpus all -w /mindglide -v "/tmp/mindglide:/mnt" mspinpoint/mindglide:may2024 {name_of_nifti_file}
Hi @phiphi0815, many thanks for your quick reply! Creating a /tmp/mindglide directory didn't work for me, as the /tmp folder has root permissions, but I set up a folder in /home/christine/Documents instead and made sure it had all the necessary user permissions. I am trying to run mindGlide like so:
docker run -it --rm -w /mindGlide -u 1000:1000 -v /home/christine/Documents/mindglide_tmp:/mnt mspinpoint/mindglide:may2024 /mnt/
Kind regards, Christine
Hi @ChristineFarrugia, may I ask what terminal you are using? Docker containers can behave very strangely if you run them in certain terminals.
If you are on a Windows machine, try using PowerShell.
The following steps worked for me:
My nifti-file is called test_flair.nii.gz.
mkdir -p C:\DockerTest\mindglide\
cp .\test_flair.nii.gz C:\DockerTest\mindglide\
docker run -it --rm --gpus all -v "C:/DockerTest/mindglide:/mnt" mspinpoint/mindglide:may2024 test_flair.nii.gz
The "Permission denied: './runs_12_fold0__mindglide'" error might also arise if the container cannot read your nifti-file properly.
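A quick way to rule that out is to check on the host that the scan is readable and actually gzip-compressed before mounting it. This is a hypothetical helper sketch (the function name and the magic-byte check are mine, not part of mindGlide):

```python
import os

def check_nifti_gz(path: str) -> bool:
    """Return True if `path` is readable and starts with the gzip
    magic bytes 0x1f 0x8b, as a .nii.gz file should."""
    if not os.access(path, os.R_OK):
        return False
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"
```

A False result for your scan means the container would fail before any permissions inside the image matter.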
In the command you shared, docker run -it --rm -w /mindGlide -u 1000:1000 -v /home/christine/Documents/mindglide_tmp:/mnt mspinpoint/mindglide:may2024 /mnt/, you are not providing a nifti file but just the /mnt/ directory at the end of the line.
If you are still experiencing permission issues, it may be caused by a more general problem.
Let me know if any of these suggestions are helpful.
Best wishes, Philipp
Hi Philipp,
Sorry for the sparse information I gave you earlier.
I ran:

docker run -it -w /mindGlide -u 1000:1000 -v /home/christine/Documents/mindglide_tmp:/mnt mspinpoint/mindglide:may2024 /mnt/test.nii.gz

/home/christine/Documents/mindglide_tmp has execute permissions all the way down and full rwx permissions on itself.

The container runs:

python /opt/mindGlide/mindGlide/run_inference.py --model_file_paths /opt/mindGlide/models/_20240404_conjurer_trained_dice_7733.pt --scan_path /mnt/test.nii.gz

which logs:

/mnt folder content: ['test.nii.gz']
model_file_paths: ['/opt/mindGlide/models/_20240404_conjurer_trained_dice_7733.pt']
model_paths: ['/opt/mindGlide/models/_20240404_conjurer_trained_dice_7733.pt']
scan to segment: /mnt/test.nii.gz
python /opt/monai-tutorials/modules/dynunet_pipeline//inference.py -fold 0 -expr_name _mindglide -task_id 12 -tta_val False --root_dir /mnt/tmpMINDGLIDEuvSkGeQx8P --datalist_path /mnt/tmpMINDGLIDEuvSkGeQx8P --checkpoint /opt/mindGlide/models/_20240404_conjurer_trained_dice_7733.pt
Output:
Error: Traceback (most recent call last):
File "/opt/monai-tutorials/modules/dynunet_pipeline//inference.py", line 204, in <module>
inference(args)
File "/opt/monai-tutorials/modules/dynunet_pipeline//inference.py", line 41, in inference
os.makedirs(infer_output_dir)
File "/opt/conda/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/opt/conda/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: './runs_12_fold0__mindglide'
Traceback (most recent call last):
File "/opt/mindGlide/mindGlide/run_inference.py", line 230, in <module>
main(args)
File "/opt/mindGlide/mindGlide/run_inference.py", line 133, in main
raise Exception("output file does not exist: ", output_file)
Exception: ('output file does not exist: ', 'runs_12_fold0__mindglide/Task12_brain/test.nii.gz')
Permissions are rwx for my user and r-x otherwise. So, I assume that the problem already occurs when the software tries to run that inference script, which fails with a permission denied error when trying to create the sub-folders in the tmp workdir, and does not generate the necessary output files (this, then, causes the second exception in the main script). But I have no idea why the internally called script fails to create its sub-folder (given that it should still run with the same uid/gid that I provide when launching docker, right?). Any ideas?
Could you try to run
docker run -it --rm --gpus all -v "/home/christine/Documents/mindglide_tmp:/mnt" mspinpoint/mindglide:may2024 test.nii.gz
And let me know if the output is different?
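For context on why dropping -w (and -u) can help: the path in the PermissionError, './runs_12_fold0__mindglide', is relative, so it is created under whatever working directory the container starts in, and that directory must be writable by the container user. A minimal sketch of the same pattern (the directory names are borrowed from the traceback; the temp directory stands in for the container's workdir):

```python
import os
import tempfile

# os.makedirs with a relative path resolves against the current
# working directory, just like the inference script inside the image.
workdir = tempfile.mkdtemp()   # stands in for the directory set by -w
os.chdir(workdir)
os.makedirs("./runs_12_fold0__mindglide/Task12_brain")
print(sorted(os.listdir(workdir)))  # → ['runs_12_fold0__mindglide']
```

With -w /mindGlide and -u 1000:1000, the script tries to create this folder inside /mindGlide, which is owned by root in the image, hence the permission error.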
Hi Philipp,
This appears to solve the issue - many thanks for your help! I am now encountering problems due to having no GPUs on my desktop. I tried to force PyTorch to use the CPU, like so:
docker run -it --rm -e CUDA_VISIBLE_DEVICES="" -u 1000:1000 --cpus=5 -v "/home/christine/Documents/mindglide_tmp:/mnt" mspinpoint/mindglide:may2024 test.nii.gz
Here is the stack trace I got:
Traceback (most recent call last):
File "/opt/monai-tutorials/modules/dynunet_pipeline//inference.py", line 204, in <module>
inference(args)
File "/opt/monai-tutorials/modules/dynunet_pipeline//inference.py", line 52, in inference
net = get_network(properties, task_id, val_output_dir, checkpoint)
File "/opt/monai-tutorials/modules/dynunet_pipeline/create_network.py", line 72, in get_network
net.load_state_dict(torch.load(pretrain_path))
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 712, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 1046, in _load
result = unpickler.load()
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 1016, in persistent_load
load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 1001, in load_tensor
wrap_storage=restore_location(storage, location),
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 176, in default_restore_location
result = fn(storage, location)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 152, in _cuda_deserialize
device = validate_cuda_device(location)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 136, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Traceback (most recent call last):
File "/opt/mindGlide/mindGlide/run_inference.py", line 230, in <module>
main(args)
File "/opt/mindGlide/mindGlide/run_inference.py", line 133, in main
raise Exception("output file does not exist: ", output_file)
Exception: ('output file does not exist: ', 'runs_12_fold0__mindglide/Task12_brain/test.nii.gz')
Can mindGlide be run on CPU only, by any chance?
We do not recommend running without a GPU, and it has not been tested. Although some users have reported successfully running it on CPU, we have never tried this in our lab.
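If you do want to experiment on CPU anyway, the RuntimeError above already names the remedy: pass map_location to torch.load so tensors saved on a GPU are remapped to the CPU. A minimal sketch, assuming PyTorch is installed (mindGlide's own loading call in create_network.py would need the equivalent change; the toy checkpoint below is just for illustration):

```python
import os
import tempfile

import torch

# Save a toy state dict, then load it with an explicit CPU mapping —
# the fix the RuntimeError suggests for CPU-only machines.
ckpt = os.path.join(tempfile.mkdtemp(), "demo_checkpoint.pt")
torch.save({"weight": torch.ones(2, 2)}, ckpt)

state = torch.load(ckpt, map_location=torch.device("cpu"))
print(state["weight"].device)  # → cpu
```

Note that even with this change, inference on CPU would be very slow and remains untested by the maintainers.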
Hello,
in the README file you state that I should execute the command:
docker run -it --rm -v $(pwd):/mindGlide -w /mindGlide armaneshaghi/ms-pinpoint/mind-glide:latest {name_of_nifti_file}
But when running this command I get the following error:
docker: Error response from daemon: pull access denied for armaneshaghi/ms-pinpoint/mind-glide, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'.
To me it seems the image name is wrong, as there is an "armaneshaghi/mind-glide" image on Docker Hub as well as a "mspinpoint/mindglide" image. Which image is the correct one, or do I really need special access to the resource, as the error above says?
Thanks in advance for your help!