Closed. Kaihua1203 closed this issue 1 year ago.
By the way, here is my command: CUDA_VISIBLE_DEVICES=1 nnUNetv2_train 14 3d_fullres 1 --npz
thanks!
@TaWald
Hey,
this is likely a duplicate of #1044.
If the steps there do not fix your issue, please reopen this thread here :)
Cheers, Tassilo
I have the same issue with evaluate_predictions. The solution in #1044 suggests adding the --verify_dataset_integrity flag, but from my understanding the issue is not at the preprocessing step: my images and labels have matching shapes. The error occurs at the prediction/postprocessing step. Is it possible that nnUNet crops some of the predicted images, so that the ground truth's and the predictions' shapes don't match? How can we solve that?
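Not a maintainer answer, but a quick way to narrow this down is to list which cases actually have mismatched shapes before re-running the evaluation. A minimal sketch (the helper name, the folder variables, and the SimpleITK usage in the comment are my assumptions, not nnUNet API):

```python
def mismatched_cases(ref_shapes, pred_shapes):
    """Return the case ids whose reference and prediction shapes differ.

    ref_shapes / pred_shapes: dicts mapping case id -> shape tuple.
    """
    return sorted(c for c in ref_shapes
                  if c in pred_shapes and ref_shapes[c] != pred_shapes[c])


# Usage sketch (gt_dir / pred_dir are hypothetical paths; shapes could be
# read with SimpleITK, which nnUNet already depends on):
# import os
# import SimpleITK as sitk
# def shapes(d):
#     return {f: sitk.ReadImage(os.path.join(d, f)).GetSize()
#             for f in os.listdir(d) if f.endswith('.nii.gz')}
# print(mismatched_cases(shapes(gt_dir), shapes(pred_dir)))
```

Any case id this prints is one whose prediction no longer matches its ground truth, which is exactly what the broadcasting error below complains about.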
Hi, something went wrong with evaluate_predictions after training. The full traceback is below:

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/multiprocessing/pool.py", line 51, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/evaluation/evaluate_predictions.py", line 107, in compute_metrics
    tp, fp, fn, tn = compute_tp_fp_fn_tn(mask_ref, mask_pred, ignore_mask)
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/evaluation/evaluate_predictions.py", line 82, in compute_tp_fp_fn_tn
    tp = np.sum((mask_ref & mask_pred) & use_mask)
ValueError: operands could not be broadcast together with shapes (1,95,320,278) (1,99,512,476)
"""

The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/jupyter-wkh/.conda/envs/nnunet/bin/nnUNetv2_train", line 8, in <module>
    sys.exit(run_training_entry())
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/run/run_training.py", line 252, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/run/run_training.py", line 197, in run_training
    nnunet_trainer.perform_actual_validation(export_validation_probabilities)
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1189, in perform_actual_validation
    metrics = compute_metrics_on_folder(join(self.preprocessed_dataset_folder_base, 'gt_segmentations'),
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/nnunetv2/evaluation/evaluate_predictions.py", line 145, in compute_metrics_on_folder
    results = pool.starmap(
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/multiprocessing/pool.py", line 375, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/multiprocessing/pool.py", line 774, in get
    raise self._value
ValueError: operands could not be broadcast together with shapes (1,95,320,278) (1,99,512,476)
Exception in thread Thread-5 (results_loop):
Traceback (most recent call last):
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    raise e
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
Exception in thread Thread-4 (results_loop):
Traceback (most recent call last):
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    raise e
  File "/home/jupyter-wkh/.conda/envs/nnunet/lib/python3.10/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
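For what it's worth, the root ValueError is just NumPy refusing to broadcast the elementwise `&` between two masks of different shapes; the evaluation itself is fine, the inputs are not. A minimal reproduction using the shapes from the traceback (spatial dims downscaled for speed):

```python
import numpy as np

# Boolean masks with mismatched shapes, mirroring the traceback's
# (1,95,320,278) vs (1,99,512,476), downscaled for speed.
mask_ref = np.zeros((1, 95, 32, 27), dtype=bool)
mask_pred = np.zeros((1, 99, 51, 47), dtype=bool)

try:
    _ = mask_ref & mask_pred  # elementwise AND needs broadcast-compatible shapes
except ValueError as e:
    print("broadcast failed:", e)
```

So the fix has to happen upstream: the ground-truth segmentation and the exported prediction for the failing case must end up with identical shapes before compute_metrics runs.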