-
File "/home/mukbad/anaconda3/envs/mypyt/lib/python3.9/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 48, in producer
item = transform(**item)
File "/home/mukbad…
-
I installed all the packages and tried to run the code. However, I get an error when the code calls default_configuration.py.
I processed the data using the nnUNet guide. In this step, nnUNet usu…
-
Hello, authors!
I want to know where I can control num_epochs and the batch size, since I have not found them in the command.
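As far as I understand nnU-Net v2, the epoch count is an attribute of the trainer class (a custom trainer subclassing the default one can override it), while the batch size is read from the generated plans file (nnUNetPlans.json) per configuration. A minimal sketch of editing the batch size in a plans-style dict, using a hypothetical, much-reduced plans structure (real plans files contain many more keys):

```python
import json

# Hypothetical, heavily trimmed excerpt of an nnUNetPlans.json-style dict.
# In real plans files the batch size lives under each configuration entry.
plans = {
    "configurations": {
        "3d_fullres": {"batch_size": 2, "patch_size": [128, 128, 128]}
    }
}

def set_batch_size(plans: dict, configuration: str, new_batch_size: int) -> dict:
    """Return a copy of the plans with one configuration's batch size changed."""
    updated = json.loads(json.dumps(plans))  # deep copy via JSON round-trip
    updated["configurations"][configuration]["batch_size"] = new_batch_size
    return updated

new_plans = set_batch_size(plans, "3d_fullres", 4)
print(new_plans["configurations"]["3d_fullres"]["batch_size"])  # 4
```

The copy-then-edit approach keeps the original plans intact, so you can write the modified dict to a new plans file and select it at training time instead of overwriting the generated one.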
-
### Description
Different models for the same `sct_deepseg` task might require different input image orientations.
For example, the default model for `-task seg_spinal_rootlets_t2w` expects LPI (s…
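Since orientation codes name the direction each voxel axis increases toward, converting between two codes that share the same axis order reduces to flipping axes. A hedged sketch of that idea (the helper below is hypothetical and handles flips only, not transposes; real tools such as nibabel or SCT handle the general case):

```python
import numpy as np

# Opposite anatomical direction for each axis-code letter:
# R<->L (right/left), A<->P (anterior/posterior), S<->I (superior/inferior).
_OPPOSITE = {"R": "L", "L": "R", "A": "P", "P": "A", "S": "I", "I": "S"}

def reorient(volume: np.ndarray, src: str, dst: str) -> np.ndarray:
    """Flip axes of `volume` so its orientation code changes from src to dst.

    Assumes src and dst list the axes in the same order (pure flips);
    codes that permute axes would additionally need a transpose.
    """
    assert len(src) == len(dst) == volume.ndim
    out = volume
    for axis, (s, d) in enumerate(zip(src, dst)):
        if s == d:
            continue
        assert _OPPOSITE[s] == d, "axis order differs; a transpose would be needed"
        out = np.flip(out, axis=axis)
    return out

vol = np.arange(8).reshape(2, 2, 2)
flipped = reorient(vol, "RAS", "LPS")  # flips axes 0 and 1, leaves axis 2 alone
```

This is why a model trained on one orientation can silently mispredict on another: the voxel array is valid either way, only the anatomical meaning of the axes changes.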
-
Hello, sorry to bother you again. I think the above error may be due to an incorrect json file, which means the data is not imported successfully and the code cannot run. I would like to ask if you…
-
Related to brats22/nnunet/pytorch
**The bug**
`orig_lbl = load_data(self.data_path, "*_orig_lbl.npy")`
The line above, which is line 51 in nnUNet/data_loading/data_module.py, is producing an as…
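For context, a glob-based loader of this shape typically asserts when its pattern matches no files, e.g. when the `*_orig_lbl.npy` files were never written during preprocessing. A hypothetical re-implementation sketch (the real data_module.py may differ):

```python
import glob
import os
import numpy as np

def load_data(data_path: str, pattern: str) -> list:
    """Load every .npy file in data_path matching the glob pattern.

    Hypothetical sketch: the assertion fires when nothing matches, which is
    the usual cause of failures like the one reported above.
    """
    files = sorted(glob.glob(os.path.join(data_path, pattern)))
    assert files, f"no files matching {pattern!r} in {data_path!r}"
    return [np.load(f) for f in files]
```

Checking that the expected `*_orig_lbl.npy` files actually exist in `self.data_path` is a good first debugging step before digging into the loader itself.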
-
Hi,
Thank you for these loss functions, they are very helpful. I am trying to run with nnUNetv2, but I cannot find nnUNetTrainerV2.py in this repository. Is th…
-
Hi,
I have previously completed preprocessing, training and prediction and everything was working fine.
I have now created a new folder structure and restarted using different data. For some reaso…
-
Hi :)
I tried training nnUNet with the skeletal recall loss on my dataset and an error occurred. It is listed below. The same dataset was also used with default nnUNet itself and there was no …
-
Hello,
I have trained nnUNet on a PNET dataset. Everything works well, but there are two issues when I test the trained model on a validation dataset:
1. Some tumors are not detected at all
2. …