MIC-DKFZ / nnUNet


RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed. You can make a clone to get a normal tensor before doing inplace update. #2364

Closed: claudiab98 closed this issue 3 months ago

claudiab98 commented 3 months ago

Hello, I installed nnU-Net on a server, trained it there, and everything worked fine. Now I want to label images on my local PC, so I installed nnU-Net locally with the following script:

```bat
@echo off
echo Installing nnU-Net environment...

REM Check if Python is installed
python --version >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
    echo Python is not installed.
    echo Please install Python 3.10 from https://www.python.org/downloads/release/python-3100/
    echo After installing Python, run this script again.
    pause
    exit /b 1
)

REM Create and activate a virtual environment
python -m venv nnunet_env
call nnunet_env\Scripts\activate

REM Upgrade pip
python -m pip install --upgrade pip
pip install --upgrade pip
pip install batchgenerators

REM Install required packages from requirements.txt
pip install "numpy<2"
pip3 install torch torchvision torchaudio
pip install IPython

REM Install system dependencies (Windows)
REM You might need to install additional system dependencies manually or using other package managers
REM Example: Install Graphviz
REM Ensure Chocolatey is installed before running this command
REM choco install graphviz

cd nnunet

pip install nnunetv2
pip install --upgrade git+https://github.com/FabianIsensee/hiddenlayer.git

REM Notify user of successful installation
echo nnU-Net environment setup completed successfully.
pause
```

I then copied the data folders (raw data, preprocessed and results) from the server to my local PC. When I run inference I get the following error. I have tried everything but could not solve the problem. Do you have any idea how to solve it?

```
(nnunet_env) C:\Users\lawre>nnUNetv2_predict -i "C:\Users\lawre\nnunet_env\nnUNet_raw\nnUNet_raw_data\Dataset012_BVSG\ImagesTltest" -o "C:\Users\lawre\nnunet_env\nnUNet_raw\nnUNet_raw_data\Dataset012_BVSG\labeltltest" -tr nnUNetTrainerDA5 -d 12 -c 2d -device cpu

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring
method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

perform_everything_on_device=True is only supported for cuda devices! Setting this to False
There are 3 cases in the source folder
I am process 0 out of 1 (max process ID is 0, we start counting with 0!)
There are 3 cases that I would like to predict
I0711 12:42:05.148000 12712 torch_dynamo\utils.py:320] TorchDynamo compilation metrics:
I0711 12:42:05.148000 12712 torch_dynamo\utils.py:320] Function, Runtimes (s)
I0711 12:42:05.164000 13384 torch_dynamo\utils.py:320] TorchDynamo compilation metrics:
I0711 12:42:05.164000 13384 torch_dynamo\utils.py:320] Function, Runtimes (s)
I0711 12:42:05.180000 14112 torch_dynamo\utils.py:320] TorchDynamo compilation metrics:
I0711 12:42:05.180000 14112 torch_dynamo\utils.py:320] Function, Runtimes (s)

Predicting BVSG_001:
perform_everything_on_device: False
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.25s/it]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:09<00:00,  9.17s/it]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\lawre\nnunet_env\Scripts\nnUNetv2_predict.exe\__main__.py", line 7, in <module>
  File "C:\Users\lawre\nnunet_env\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 864, in predict_entry_point
    predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
  File "C:\Users\lawre\nnunet_env\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 256, in predict_from_files
    return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lawre\nnunet_env\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 373, in predict_from_data_iterator
    prediction = self.predict_logits_from_preprocessed_data(data).cpu()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lawre\nnunet_env\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 492, in predict_logits_from_preprocessed_data
    prediction += self.predict_sliding_window_return_logits(data).to('cpu')
RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed. You can make a clone to get a normal tensor before doing inplace update. See https://github.com/pytorch/rfcs/pull/17 for more details.
I0711 12:42:30.384000 17132 torch_dynamo\utils.py:320] TorchDynamo compilation metrics:
I0711 12:42:30.384000 17132 torch_dynamo\utils.py:320] Function, Runtimes (s)
I0711 12:42:31.510000 13000 torch_dynamo\utils.py:320] TorchDynamo compilation metrics:
I0711 12:42:31.510000 13000 torch_dynamo\utils.py:320] Function, Runtimes (s)
```

ykirchhoff commented 3 months ago

Hi @claudiab98,

the issue is that nnU-Net performs an inplace update on the prediction tensor during inference:

`prediction += self.predict_sliding_window_return_logits(data).to('cpu')`

This works fine on GPU, as it moves the output of `self.predict_sliding_window_return_logits` to the CPU, but it leads to problems on CPU, because inplace updates to tensors in inference mode are not allowed. This can be solved by adding `.clone()` at the end of lines 492 and 494 in `nnunetv2/inference/predict_from_raw_data.py`. Unfortunately this is not a permanent solution for nnU-Net, as it slows down predictions on GPU.

I see that you installed nnU-Net from PyPI, so you would need to clone the repository and install it from within your (local) repository, as explained in the documentation, before making the change. Let me know if you have any issues with that.

In general I would recommend running inference on the GPU, as it is significantly faster, so maybe you can do the prediction on the server instead of your local machine.
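In case it helps others hitting this on CPU, here is a minimal, self-contained sketch of the failure mode and the `.clone()` workaround. This is not nnU-Net code; `model`, `logits` and `prediction` are stand-in names that only mimic the accumulation pattern visible in the traceback:

```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the segmentation network

with torch.inference_mode():
    # Tensors created under inference_mode are "inference tensors".
    logits = model(torch.randn(2, 4))

# On CPU, .to('cpu') is a no-op, so `prediction` is still an inference tensor.
prediction = logits.to('cpu')

try:
    # An inplace update on an inference tensor outside inference_mode fails:
    prediction += logits.to('cpu')
except RuntimeError as e:
    print(e)  # "Inplace update to inference tensor outside InferenceMode is not allowed. ..."

# Workaround: .clone() outside inference_mode yields a normal tensor,
# which can safely be updated inplace afterwards.
prediction = logits.to('cpu').clone()
prediction += logits.to('cpu')
print(prediction.sum())
```

On a CUDA device the problem does not surface, presumably because `.to('cpu')` then performs a real device copy rather than a no-op. Note that for the `.clone()` edit to take effect, you have to run the cloned repository rather than the PyPI package, e.g. `git clone https://github.com/MIC-DKFZ/nnUNet.git` followed by an editable install with `pip install -e .` inside the clone, as described in the installation instructions.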

Best,
Yannick

claudiab98 commented 3 months ago

Thank you, it worked :)

chris-rapson-formus commented 3 months ago

This is a duplicate of #2193 (and #2262)