Open KumudhaNarayana opened 1 year ago
Hi @KumudhaNarayana . Did you already manage to solve the issue?
Hi Saikat Roy,
No, I haven't resolved it yet. Do you have any suggestions for how to solve this problem?
Hello,
I have been using nnU-Net for a while, and this is the first time I encountered this error. Please see the message below:
Traceback (most recent call last):
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 48, in producer
    item = transform(**item)
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\transforms\abstract_transforms.py", line 88, in __call__
    data_dict = t(**data_dict)
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\transforms\spatial_transforms.py", line 346, in __call__
    ret_val = augment_spatial(data, seg, patch_size=patch_size,
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\augmentations\spatial_transformations.py", line 200, in augment_spatial
    seg_result = np.zeros((seg.shape[0], seg.shape[1], patch_size[0], patch_size[1], patch_size[2]),
IndexError: index 1 is out of bounds for axis 0 with size 1

Traceback (most recent call last):
  File "C:\Users\Anaconda3\Scripts\nnUNet_train-script.py", line 33, in <module>
    sys.exit(load_entry_point('nnunet==1.7.0', 'console_scripts', 'nnUNet_train')())
  File "C:\Users\Anaconda3\lib\site-packages\nnunet\run\run_training.py", line 177, in main
    trainer.run_training()
  File "C:\Users\Anaconda3\lib\site-packages\nnunet\training\network_training\nnUNetTrainerV2.py", line 442, in run_training
    ret = super().run_training()
  File "C:\Users\Anaconda3\lib\site-packages\nnunet\training\network_training\nnUNetTrainer.py", line 316, in run_training
    super(nnUNetTrainer, self).run_training()
  File "C:\Users\Anaconda3\lib\site-packages\nnunet\training\network_training\network_trainer.py", line 417, in run_training
    _ = self.tr_gen.next()
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 182, in next
    return self.__next__()
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 206, in __next__
    item = self.__get_next_item()
  File "C:\Users\Anaconda3\lib\site-packages\batchgenerators\dataloading\multi_threaded_augmenter.py", line 190, in __get_next_item
    raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of "
RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full
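For what it's worth, the actual error is the IndexError further up, not the RuntimeError at the bottom: NumPy is being asked for an index past the end of axis 0 when `augment_spatial` builds `seg_result` from `patch_size`. A minimal, purely illustrative reproduction (this is plain NumPy, not nnU-Net code, and the idea that `patch_size` ends up shorter than expected is my guess at the cause):

```python
import numpy as np

# Hypothetical: a per-axis size array with only one entry, e.g. because one
# input case had fewer spatial dimensions than the rest of the dataset.
patch_size = np.array([192])

try:
    patch_size[1]  # asking for a second axis that does not exist
except IndexError as e:
    print(e)  # e.g. "index 1 is out of bounds for axis 0 with size 1"
```

This is the same message as in the traceback, which is why a dimensionality mismatch among the newly added images is a plausible suspect.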
I have trained on this dataset before, and it worked. The only change this time is that I added more images to the dataset, and training now fails on some of them. I have been trying to figure out how to fix this error, but I have no clue. Please let me know if you have any suggestions.
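Since the failure started after adding new images, one quick sanity check is to compare the array shapes of every case and flag the outliers. The helper below is a hedged sketch of mine, not part of nnU-Net, and it assumes the culprit is a case whose number of dimensions differs from the rest of the dataset:

```python
from collections import Counter


def find_shape_outliers(case_shapes):
    """Given a mapping of case id -> array shape, return the cases whose
    number of dimensions differs from the most common one in the dataset.
    (Hypothetical audit helper, not an nnU-Net function.)"""
    ndims = Counter(len(shape) for shape in case_shapes.values())
    expected_ndim = ndims.most_common(1)[0][0]
    return {case: shape for case, shape in case_shapes.items()
            if len(shape) != expected_ndim}


# Example usage with SimpleITK (adjust paths and loader to your setup):
# from pathlib import Path
# import SimpleITK as sitk
# shapes = {p.name: sitk.GetArrayFromImage(sitk.ReadImage(str(p))).shape
#           for p in Path("imagesTr").glob("*.nii.gz")}
# print(find_shape_outliers(shapes))
```

If this flags any of the newly added images (e.g. a 2D slice or a 4D series mixed into a 3D dataset), re-exporting those cases with the same dimensionality as the rest may resolve the crash.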
I appreciate any help you can provide.