Lucas-rbnt opened this issue 1 year ago
Thanks for your interest in BraTS Toolkit.
"I assume the BraTS server is running locally?" it should 🙈
What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Hi,
Sorry for the delayed answer. I use Docker daily, so I stuck with my workflow.
I assumed it was possible since NVIDIA Docker is a wrapper around regular Docker, but maybe I am wrong here?
Thanks again for your answer, Lucas.
"What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html"
Hello from Docker!
This message shows that your installation appears to be working correctly.
....
Can you please show the full output, including GPU and CUDA version?
"If yes then I guess it's due to the lack of a web server on the computing server, is it possible to disable it and use the Python API only?"
Can you elaborate what you mean here? :)
I'm working on a computing server with no web server; I thought that maybe the problem came from there?
You mean the BraTS output? Because even when trying to work in CPU-only mode, I'm still stuck at this part of the process.
Otherwise, the compute server has 4 GeForce 2080 Ti GPUs and CUDA 11.6.
"I'm working on a computing server with no web server," Cannot follow you, sorry, please elaborate.
What happens internally: The backend is started in a docker, and it opens a local flask server that is communicating with the python frontend via WebSockets.
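A quick way to sanity-check that this local backend server is reachable is a plain TCP probe from the Python side. This is a minimal stdlib sketch; the host and port 5000 are assumptions based on Flask's default, and the Toolkit may bind to a different port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the port the local Flask backend is assumed to listen on.
print(port_open("127.0.0.1", 5000))
```

If this prints False while the backend container is supposedly running, the frontend's WebSocket connection has nothing to talk to, which would match the observed hang.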
We do have another preprocessing pipeline not requiring docker that will be published soon.
Thank you for your answer.
So I guess, my problem might come from the lack of a graphics server on my compute server.
No, BraTS Toolkit can run headless without trouble.
Oh thanks again.
Then I have no idea why it's blocked at this stage.
Are other Docker containers running on the system? Which ports are already taken? Can you show the full output from the hello-world?
https://github.com/neuronflow/BraTS-Toolkit/blob/master/0_preprocessing_single.py
Did you confirm processing of the exam? Otherwise, try setting the confirm parameter to False.
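For reference, a sketch of what that call might look like, modeled on the linked 0_preprocessing_single.py. The parameter names and mode values are assumptions taken from that script, and the import is deferred so the sketch reads without the package or Docker installed:

```python
def preprocess_single_exam(t1, t1c, t2, flair, output_folder, use_gpu=True):
    """Sketch of a single-exam preprocessing call with confirm=False.

    Parameter names are assumptions based on the repository's
    0_preprocessing_single.py example script.
    """
    # Deferred import: requires `pip install BraTS-Toolkit` plus a running
    # Docker daemon with the NVIDIA runtime for GPU mode.
    from brats_toolkit.preprocessor import Preprocessor

    prep = Preprocessor()
    prep.single_preprocess(
        t1File=t1,
        t1cFile=t1c,
        t2File=t2,
        flaFile=flair,
        outputFolder=output_folder,
        mode="gpu" if use_gpu else "cpu",
        confirm=False,  # skip the interactive confirmation step
    )
```

With confirm=False the pipeline should proceed without waiting for interactive input, which is useful on a headless compute server.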
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Some ports are already taken, of course, but not port 5000, the one dedicated to Flask. No containers are currently running on the system.
I tried both CPU and GPU mode, and both confirm=True and confirm=False.
Is BraTS Toolkit using a Docker image that requires a login to be pulled?
This appears to be the wrong hello world.
What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Can you elaborate on "wrong hello world"?
I tried my regular Docker installation, and I also changed my config to do it the nvidia-ctk way. Everything works as expected in their documentation, and my outputs (including the hello-world one) match the documentation's.
Please read the link: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Perhaps the hello world is confusing you; please run with and without sudo:
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
and post the output.
Ah yes, I thought you wanted the output of the hello-world container ahah, I didn't quite understand why.
It seems to work fine, I can see my GPUs:
Unable to find image 'nvidia/cuda:11.6.2-base-ubuntu20.04' locally
11.6.2-base-ubuntu20.04: Pulling from nvidia/cuda
[PULLING PROCESS]
Status: Downloaded newer image for nvidia/cuda:11.6.2-base-ubuntu20.04
Tue Feb 14 11:56:02 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03 Driver Version: 470.161.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 25% 41C P2 57W / 250W | 1774MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:43:00.0 Off | N/A |
| 29% 30C P8 1W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce ... Off | 00000000:81:00.0 Off | N/A |
| 29% 26C P8 2W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA GeForce ... Off | 00000000:C1:00.0 Off | N/A |
| 29% 26C P8 15W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Okay, your Docker installation seems to be fine. Which data are you trying to process?
What do you see if you type docker ps?
Trying to process private data. Every sample is a *.nii file.
docker ps returns my greedy_elephant container running.
What happens if you process the example data?
Well, I'm sorry, I think I have finally found the origin. Rereading your BraTS Toolkit paper, the registration is done on the T1 (and not on the T1ce as in similar tools). In my case I don't have all four modalities; I use only two (FLAIR and T1ce), and I passed the T1ce file as the T1, so I guess the registration fails. Trying the BraTS Toolkit on the four-modality BraTS data, it worked.
Is that the problem?
Yes, very likely.
I have an alternative t1-c centric preprocessing pipeline that can deal with fewer modalities that we can hopefully publish soon.
Yes, sorry to have wasted your time on this issue. Do you have a date for the t1-c centric alternative? We try to harmonise our preprocessing as much as possible, and the Python API of your tool offers a considerable advantage, which makes it a big plus in our processing phases.
No worries.
Would you be interested in investing time and serving as a beta tester? If so, we can set up a call and discuss :)
Yes, of course, it could be very interesting. I would also really like to be able to integrate the tool into my Python routine for my research.
@Lucas-rbnt still interested? It is now ready for the first tests.
Yes I am!
I wrote you on LinkedIn let's coordinate there :)
@Lucas-rbnt please see the post above. Also:
"Trying to process private data. Every sample is a *.nii file. docker ps returns my greedy_elephant container running."
Can you try with .nii.gz files?
Hey, sorry to bother, but I have the same problem although I have all the modalities. It hangs at:
status received: {'code': 201, 'message': 'input inspection queued!'} status received: {'code': 201, 'message': 'nifti examination queued!'}
I don't know what to do. My files are .nii, not .nii.gz.
Until this issue https://github.com/neuronflow/BraTS-Toolkit/issues/18 is closed, you need .nii.gz files. Just renaming is enough; you don't actually need to compress them. You can also try our new preprocessing toolkit, which is much more capable and under active development:
https://github.com/BrainLesion/preprocessing
You can use it like this:
https://github.com/BrainLesion/preprocessing/blob/main/example_modality_centric_preprocessor.py
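Since renaming alone is enough, it can be scripted. A small stdlib sketch that appends .gz to every .nii file in a folder (the folder layout is an assumption):

```python
from pathlib import Path

def rename_nii_to_gz(folder):
    """Append .gz to every *.nii file name in `folder`.

    No actual compression is done; just the suffix change, which is
    reportedly enough until the linked issue is resolved.
    """
    renamed = []
    # sorted() materializes the glob results before renaming,
    # so we never mutate the directory while iterating it.
    for nii in sorted(Path(folder).glob("*.nii")):
        target = nii.with_name(nii.name + ".gz")  # exam.nii -> exam.nii.gz
        nii.rename(target)
        renamed.append(target)
    return renamed
```

Run it once on the input folder before launching the preprocessing.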
This is way faster than I expected, thanks for informing me. I will try the new one. Nevertheless, adding .gz didn't solve the issue. Issue #18 was about the output; here it says no such file or directory after changing .nii to .nii.gz without doing any compression. I have tried both the CLI and Python. I am only trying to do the preprocessing step.
Hello, I've been able to identify the problem and fix it. Can you share the full output to see if we indeed have the same problem?
Hi everyone, I work on a computing server, and when trying to use single preprocessing I get stuck at:
I assume the BraTS server is running locally? If yes then I guess it's due to the lack of a web server on the computing server, is it possible to disable it and use the Python API only?
Sorry for the inconvenience, Lucas Robinet.