Fan-dentist opened 1 week ago
Hi there!
You seem to be running into memory issues during the nnUNet processing on Windows 11. Based on the logs, the error `Unable to allocate 2.20 GiB for an array` suggests that your system is running out of RAM. A few things to try:
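For scale, the size of the failing allocation can be reproduced from the array shape in your traceback (the shape and float64 dtype come straight from the log; float32 is shown only for comparison, since nnUNet's preprocessing casts to float64 at this step):

```python
import math

# Shape and dtype from the failing allocation in the traceback
shape = (1, 672, 678, 648)
n_voxels = math.prod(shape)

print(f"float64: {n_voxels * 8 / 2**30:.2f} GiB")  # 2.20 GiB, matching the error
print(f"float32: {n_voxels * 4 / 2**30:.2f} GiB")  # 1.10 GiB, half the footprint
```

Note that resampling holds the source and destination arrays at the same time, so the peak requirement is higher than this single-copy figure.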
- Reduce the number of workers: try lowering the number of preprocessing and segmentation export processes by passing smaller values, such as 1, for `-npp` (number of preprocessing workers) and `-nps` (number of segmentation export workers).
- Use a smaller batch size or resolution: if the volume is large, try lowering the resolution or batch size to reduce the memory footprint during processing.
- Increase RAM or use a swap file: adding RAM or configuring a larger swap file (page file on Windows) could help.
- Check for background processes: close any other applications that are using a lot of memory to free up more RAM for nnUNet.
- Review the logs for other warnings: some warnings also mention an old nnU-Net plans format, so make sure your model and architecture configurations are up to date.
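On the command line, the two worker flags look like this (the paths are placeholders; the flag names and the dataset/configuration values are taken from the command in your log):

```shell
nnUNetv2_predict \
    -i PATH/TO/input -o PATH/TO/output \
    -d Dataset111_453CT -c 3d_fullres -f 0 \
    -npp 1 -nps 1 \
    -device cuda --disable_tta
```

Note that the command in your log already runs with `-npp 1 -nps 1`, so if memory still runs out the volume itself may simply be too large for the available RAM.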
Let me know if you need further assistance.
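As a rough illustration of the resolution point: halving the resolution along each axis cuts memory roughly 8x. This toy sketch uses stride-based slicing only to show the arithmetic; for a real scan you would resample with proper interpolation (e.g. in Slicer) before segmentation:

```python
import numpy as np

# Toy volume standing in for the CT scan (the real one is 672 x 678 x 648)
vol = np.zeros((64, 64, 64), dtype=np.float64)

# Keeping every second voxel along each axis: ~8x less memory
small = vol[::2, ::2, ::2]

print(vol.nbytes // 1024, "KiB")    # 2048 KiB
print(small.nbytes // 1024, "KiB")  # 256 KiB
```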
I am having some problems on Windows 11, could you help me troubleshoot? Here is what I see:

[Python] Failed to load the segmentation.
[Python] Something went wrong during the nnUNet processing.
[Python] Please check the logs for potential errors and contact the library maintainers.
2024/10/09 12:55:51.980 :: nnUNet is already installed (2.5.1) and compatible with requested version (nnunetv2).
2024/10/09 12:55:54.078 :: Transferring volume to nnUNet in C:/Users/jb/AppData/Local/Temp/Slicer-TcKeDo
2024/10/09 12:57:01.759 :: Starting nnUNet with the following parameters:
2024/10/09 12:57:01.759 ::
2024/10/09 12:57:01.759 :: F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Scripts\nnUNetv2_predict.exe -i C:/Users/jb/AppData/Local/Temp/Slicer-TcKeDo/input -o C:/Users/jb/AppData/Local/Temp/Slicer-TcKeDo/output -d Dataset111_453CT -tr nnUNetTrainer -p nnUNetPlans -c 3d_fullres -f 0 -npp 1 -nps 1 -step_size 0.5 -device cuda -chk checkpoint_final.pth --disable_tta
2024/10/09 12:57:01.759 ::
2024/10/09 12:57:01.759 :: JSON parameters :
2024/10/09 12:57:01.759 :: {
2024/10/09 12:57:01.759 ::   "folds": "0",
2024/10/09 12:57:01.759 ::   "device": "cuda",
2024/10/09 12:57:01.759 ::   "stepSize": 0.5,
2024/10/09 12:57:01.759 ::   "disableTta": true,
2024/10/09 12:57:01.759 ::   "nProcessPreprocessing": 1,
2024/10/09 12:57:01.759 ::   "nProcessSegmentationExport": 1,
2024/10/09 12:57:01.759 ::   "checkPointName": "",
2024/10/09 12:57:01.759 ::   "modelPath": {
2024/10/09 12:57:01.759 ::     "_path": "F:\3Dslicer\Slicer 5.7.0-2024-10-05\slicer.org\Extensions-33047\DentalSegmentator\lib\Slicer-5.7\qt-scripted-modules\Resources\ML"
2024/10/09 12:57:01.759 ::   }
2024/10/09 12:57:01.759 :: }
2024/10/09 12:57:01.788 :: nnUNet preprocessing...
2024/10/09 12:57:09.267 :: F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py:84: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
2024/10/09 12:57:09.267 :: checkpoint = torch.load(join(model_training_output_dir, f'fold_{f}', checkpoint_name),
2024/10/09 12:57:09.634 :: F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\utilities\plans_handling\plans_handler.py:37: UserWarning: Detected old nnU-Net plans format. Attempting to reconstruct network architecture parameters. If this fails, rerun nnUNetv2_plan_experiment for your dataset. If you use a custom architecture, please downgrade nnU-Net to the version you implemented this or update your implementation + plans.
2024/10/09 12:57:09.634 :: warnings.warn("Detected old nnU-Net plans format. Attempting to reconstruct network architecture "
2024/10/09 12:57:37.695 :: Process SpawnProcess-16:
2024/10/09 12:57:37.781 :: Traceback (most recent call last):
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\runpy.py", line 197, in _run_module_as_main
2024/10/09 12:57:37.781 :: Traceback (most recent call last):
2024/10/09 12:57:37.781 :: return _run_code(code, main_globals, None,
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\runpy.py", line 87, in _run_code
2024/10/09 12:57:37.781 :: exec(code, run_globals)
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Scripts\nnUNetv2_predict.exe\__main__.py", line 7, in <module>
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\multiprocessing\process.py", line 315, in _bootstrap
2024/10/09 12:57:37.781 :: self.run()
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\multiprocessing\process.py", line 108, in run
2024/10/09 12:57:37.781 :: self._target(*self._args, **self._kwargs)
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\data_iterators.py", line 58, in preprocess_fromfiles_save_to_queue
2024/10/09 12:57:37.781 :: raise e
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\data_iterators.py", line 31, in preprocess_fromfiles_save_to_queue
2024/10/09 12:57:37.781 :: data, seg, data_properties = preprocessor.run_case(list_of_lists[idx],
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\preprocessing\preprocessors\default_preprocessor.py", line 139, in run_case
2024/10/09 12:57:37.781 :: data, seg = self.run_case_npy(data, seg, data_properties, plans_manager, configuration_manager,
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\preprocessing\preprocessors\default_preprocessor.py", line 84, in run_case_npy
2024/10/09 12:57:37.781 :: data = configuration_manager.resampling_fn_data(data, new_shape, original_spacing, target_spacing)
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\preprocessing\resampling\default_resampling.py", line 111, in resample_data_or_seg_to_shape
2024/10/09 12:57:37.781 :: data_reshaped = resample_data_or_seg(data, new_shape, is_seg, axis, order, do_separate_z, order_z=order_z)
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\preprocessing\resampling\default_resampling.py", line 144, in resample_data_or_seg
2024/10/09 12:57:37.781 :: data = data.astype(float, copy=False)
2024/10/09 12:57:37.781 :: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 2.20 GiB for an array with shape (1, 672, 678, 648) and data type float64
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 866, in predict_entry_point
2024/10/09 12:57:37.781 :: predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 258, in predict_from_files
2024/10/09 12:57:37.781 :: return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\predict_from_raw_data.py", line 351, in predict_from_data_iterator
2024/10/09 12:57:37.781 :: for preprocessed in data_iterator:
2024/10/09 12:57:37.781 :: File "F:\3Dslicer\Slicer 5.7.0-2024-10-05\lib\Python\Lib\site-packages\nnunetv2\inference\data_iterators.py", line 111, in preprocessing_iterator_fromfiles
2024/10/09 12:57:37.781 :: raise RuntimeError('Background workers died. Look for the error message further up! If there is '
2024/10/09 12:57:37.781 :: RuntimeError: Background workers died. Look for the error message further up! If there is none then your RAM was full and the worker was killed by the OS. Use fewer workers or get more RAM in that case!
2024/10/09 12:57:38.255 :: #######################################################################
2024/10/09 12:57:38.255 :: Please cite the following paper when using nnU-Net:
2024/10/09 12:57:38.255 :: Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
2024/10/09 12:57:38.255 :: #######################################################################
2024/10/09 12:57:38.255 ::
2024/10/09 12:57:38.255 :: There are 1 cases in the source folder
2024/10/09 12:57:38.255 :: I am process 0 out of 1 (max process ID is 0, we start counting with 0!)
2024/10/09 12:57:38.255 :: There are 1 cases that I would like to predict
2024/10/09 12:57:41.729 :: Loading inference results...
2024/10/09 12:59:12.916 :: Error loading results :
2024/10/09 12:59:12.916 :: Failed to load the segmentation.
2024/10/09 12:59:12.916 :: Something went wrong during the nnUNet processing.
2024/10/09 12:59:12.916 :: Please check the logs for potential errors and contact the library maintainers