RuntimeError: Inplace update to inference tensor outside InferenceMode is not allowed. You can make a clone to get a normal tensor before doing inplace update. See https://github.com/pytorch/rfcs/pull/17 for more details. #2262
When I run inference on the CPU I get the following error. The model is deployed in a Docker container running Ubuntu, with PyTorch 2.3.0+cu121.
The full error is:

```
File "/opt/conda/bin/nnUNetv2_predict", line 8, in <module>
    sys.exit(predict_entry_point())
File "/opt/conda/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 864, in predict_entry_point
    predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
File "/opt/conda/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 256, in predict_from_files
    return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
File "/opt/conda/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 373, in predict_from_data_iterator
    prediction = self.predict_logits_from_preprocessed_data(data).cpu()
File "/opt/conda/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 492, in predict_logits_from_preprocessed_data
    prediction += self.predict_sliding_window_return_logits(data).to('cpu')
```
I think it might be related to the last line, `prediction += self.predict_sliding_window_return_logits(data).to('cpu')`.
How can I solve the error?