MIC-DKFZ / nnUNet


python files #396

Closed jiangyuhan666 closed 3 years ago

jiangyuhan666 commented 3 years ago

Hi @FabianIsensee: I see that nnU-Net is set up as a pip package, so we can run commands like nnUNet_train directly. If I want to know which Python file is executed by such a command, how can I check which file it calls? Looking forward to your answer.

Best, Jiang

FabianIsensee commented 3 years ago

Have a look here: https://github.com/MIC-DKFZ/nnUNet/blob/master/setup.py Best, Fabian
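For context, pip installs these commands as `console_scripts` entry points declared in setup.py; each entry maps a command name to a `package.module:function` pair, so that file is where to look up which Python module a command runs. A minimal sketch of such a mapping (the module paths below are illustrative; the authoritative list is in the linked setup.py):

```python
# Minimal sketch of a setup.py with console_scripts entry points.
# Each entry maps a CLI command name to "package.module:function".
# The module paths shown here are illustrative -- check the linked
# setup.py of the nnUNet repository for the real ones.
from setuptools import setup, find_packages

setup(
    name="nnunet",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "nnUNet_train = nnunet.run.run_training:main",
            "nnUNet_predict = nnunet.inference.predict_simple:main",
        ],
    },
)
```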

jiangyuhan666 commented 3 years ago

Thanks a lot! I'm facing another problem: when I run nnUNet_plan_and_preprocess, it gets stuck and never finishes. When I check with top in the terminal, I don't see the process using any CPU.

FabianIsensee commented 3 years ago

https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/common_problems_and_solutions.md#nnu-net-gets-stuck-during-preprocessing-training-or-inference

jiangyuhan666 commented 3 years ago

During model inference I have set both num_threads_preprocessing and num_threads_nifti_save to 1, but when I watch my processes with top during the run, the nnU-Net process still runs for a while and then disappears, and the program gets stuck there. My RAM is around 64 GB. This is the command I run:

nnUNet_predict -i xxx/xxx/imagesTS/ -o xxx/xxx/xxx -t xx -m 3d_fullres -f 4 --num_threads_preprocessing 1 -chk model_best

FabianIsensee commented 3 years ago

Yeah, you are still running out of memory. Is all your RAM used up shortly before the nnU-Net process disappears? What size are your images and how many classes do you have?
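One way to answer the first question is to log available RAM once per second in a second terminal while nnUNet_predict runs; a minimal sketch using psutil (an extra package, not an nnU-Net dependency):

```python
# Log available system RAM once per second; run this in a second terminal
# while nnUNet_predict is running to see whether memory drops to zero
# right before the process disappears. Requires: pip install psutil
import time
import psutil

while True:
    mem = psutil.virtual_memory()
    print(f"available: {mem.available / 1024**3:5.1f} GiB  ({mem.percent:.0f}% used)")
    time.sleep(1)
```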

jiangyuhan666 commented 3 years ago

I used free -m to check, and it shows around 20 GB used and 40 GB free, so I'm sure I have enough RAM and GPU memory. What's more, if I use a tuple to return the output of the net, it seems that when predicting the test images I should convert the tuple to a tensor before the softmax; I'm not sure if that is a bug. The dataset is KiTS19 (512×512×n images) with a patch size of 128*128.

FabianIsensee commented 3 years ago

The problem is not in the prediction, it is in exporting the segmentation. That can take a lot of RAM, and if I remember correctly 64 GB is not enough for that. This is one of the rare cases where you may want to create a swap drive (64 GB) located on an SSD; then it will work. You can also try --mode fast or --mode fastest in nnUNet_predict, but this will reduce the quality of the output.
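To get a feeling for why the export step needs so much RAM: the float32 softmax probabilities are held for every class and resampled back to the original image grid, with several temporary copies alive at once. A rough back-of-the-envelope sketch with illustrative numbers (not measured from KiTS19):

```python
# Rough estimate of the memory needed for one softmax volume during
# segmentation export. Grid size and class count are illustrative only.
num_classes = 3                      # e.g. background, kidney, tumor
grid = (512, 512, 800)               # original-resolution voxel grid (example)
bytes_per_value = 4                  # float32

one_copy = num_classes * grid[0] * grid[1] * grid[2] * bytes_per_value
print(f"one softmax copy: {one_copy / 1024**3:.1f} GiB")
# Resampling to the original spacing keeps several such temporaries in
# memory at the same time, so peak usage is a multiple of this figure.
```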

jiangyuhan666 commented 3 years ago

Okay, thanks a lot! By the way, if you use --mode fast or --mode fastest, roughly how many percentage points does the Dice score drop in general?

FabianIsensee commented 3 years ago

I don't know, I have never tested that. The difference is in the segmentation export. If you use 3d_fullres the drop in Dice will not be big; if you use 3d_lowres there can be a larger drop. It depends on the structures you are segmenting.