MIC-DKFZ / nnUNet

Apache License 2.0

Does this code of nnUNet work well? #16

Closed zxyyxzz closed 4 years ago

zxyyxzz commented 5 years ago

Hi FabianIsensee: Incredible and outstanding work! You are the best researcher in the segmentation area. Do you still remember me? It's Xinyu. When I read your nnUNet paper tonight it was very exciting! It will be a strong baseline for U-Net. Does the code work now?

Best Xinyu

zxyyxzz commented 5 years ago

> Hi, from my experience the dice loss works, also without the crossentropy part. You should probably give it more time. This is just epoch 8 :-) If you want it to converge faster, set do_bg=True. This will also increase stability. But it may reduce performance a tiny little bit. Best, Fabian

Hi,

1. If I only use the dice loss, should label 0 be included in the dice loss? It seems label 0 is included when setting do_bg=True.

2. In your experience, how many epochs does nnUNet need to converge?

3. In [0.02365254889172594, 0.7808338269560635, 0.8130702233783669], is the order label 1, label 2, label 4, or label 4, label 1, label 2? :-) Best, Xinyu

FabianIsensee commented 5 years ago

Hi, if you only use the dice loss then stability is better if the background is optimized as well. But this is rarely a problem (it is ONLY a problem SOMETIMES in Task01/BraTS).

> [0.02365254889172594, 0.7808338269560635, 0.8130702233783669]

These are the three foreground classes in ascending order of their integer values: whatever has the integer value 1 in the label maps is the first, etc.

nnUNet needs ~500 epochs for BraTS-like data.

Best, Fabian
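For readers following along: a soft Dice loss with an optional background term might look like the minimal sketch below. This is my own illustration, not nnUNet's actual implementation; only the `do_bg` flag name is taken from the discussion above.

```python
import numpy as np

def soft_dice_loss(probs, onehot, do_bg=False, eps=1e-5):
    """Soft Dice loss over a batch.

    probs:  (batch, classes, ...) softmax probabilities
    onehot: (batch, classes, ...) one-hot ground truth
    do_bg:  if True, channel 0 (background) also contributes to the loss,
            which tends to stabilize training at a tiny cost in accuracy.
    """
    if not do_bg:
        probs, onehot = probs[:, 1:], onehot[:, 1:]
    axes = tuple(range(2, probs.ndim))  # sum over spatial dimensions only
    intersect = (probs * onehot).sum(axis=axes)
    denom = probs.sum(axis=axes) + onehot.sum(axis=axes)
    dice = (2 * intersect + eps) / (denom + eps)  # per (batch, class)
    return 1 - dice.mean()
```

A perfect prediction gives a loss of 0; the per-class dice values that get logged would follow the same ascending label order described above.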

zxyyxzz commented 5 years ago

> Hi, if you only use the dice loss then stability is better if the background is optimized as well. But this is rarely a problem (it is ONLY a problem SOMETIMES in Task01/BraTS).
>
> [0.02365254889172594, 0.7808338269560635, 0.8130702233783669]
>
> These are the three foreground classes in ascending order of their integer values: whatever has the integer value 1 in the label maps is the first, etc.
>
> nnUNet needs ~500 epochs for BraTS-like data.
>
> Best, Fabian

Hi, thanks!

1. But there is a weird problem: the output channels are ordered label 1, label 2, label 4. The number of label 4 voxels (the enhancing core) is very small, so its dice should be lower than label 1 and label 2 during training, yet the global dice of label 4 is 0.8131, label 1 is 0.0237, and label 2 is 0.7808.

[0.02365254889172594, 0.7808338269560635, 0.8130702233783669]

2. I mean: in your experience, after how many epochs should the dice rise above 0.0237? At epoch 16 it is still only 0.028, but with dice + CE the dice rises quickly.

Best, Xinyu

FabianIsensee commented 5 years ago

Hi, please be patient. And if the dice score does not rise, use the background label as well. Like I said, the dice loss has issues with BraTS (and BraTS only). If you look into the segmentation maps, likely the entire background is labelled 1 by the network and the dice is so low because of all the false positives. I don't know why this happens; that's just how it is.

So you should

1) let it run a day or two and see if the problem solves itself

2) if that does not happen, use do_bg=True

zxyyxzz commented 5 years ago

> Hi, please be patient. And if the dice score does not rise, use the background label as well. Like I said, the dice loss has issues with BraTS (and BraTS only). If you look into the segmentation maps, likely the entire background is labelled 1 by the network and the dice is so low because of all the false positives. I don't know why this happens; that's just how it is.
>
> So you should
>
> 1. let it run a day or two and see if the problem solves itself
>
> 2. if that does not happen, use do_bg=True

Hi, thanks for your reply. I will be patient and see if the dice rises. Best :-) Xinyu

DecentMakeover commented 5 years ago

I love this thread for some reason

FabianIsensee commented 5 years ago

not sure what you mean

zxyyxzz commented 5 years ago

> not sure what you mean

Hi, recently I hit "mmap length is greater than file size" while training on the BraTS 2019 dataset. The error is caused by one of the BraTS 2019 cases, named "BraTs19_CBICA_A00_1".

Any solutions for this error?

Best Xinyu

FabianIsensee commented 5 years ago

Hi, yes I know this error. Sometimes for whatever reason something is wrong with the npy files. Delete all the npy files (NOT the npz files!) in your preprocessed data folder and run again. Best, Fabian
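The cleanup Fabian describes can be done with a few lines; the folder path here is a placeholder, adjust it to your own preprocessed data location:

```python
from pathlib import Path

def remove_npy(preprocessed_dir):
    """Delete all .npy files under the preprocessed folder, keeping the
    .npz archives, which will be unpacked again on the next training run."""
    removed = 0
    for f in Path(preprocessed_dir).rglob("*.npy"):
        f.unlink()
        removed += 1
    return removed

# Placeholder path, adjust to your setup:
# remove_npy("nnUNet_preprocessed/Task01_BrainTumour")
```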

zxyyxzz commented 5 years ago

> Hi, yes I know this error. Sometimes for whatever reason something is wrong with the npy files. Delete all the npy files (NOT the npz files!) in your preprocessed data folder and run again. Best, Fabian

Hi Fabian, do you know the difference between "Ensembling" and "Ensembling test cases"? I know "Ensembling test cases" means averaging the probabilities of different models, but "Ensembling" says it figures out the best combination of models.

Best Xinyu

FabianIsensee commented 5 years ago

Hi, what exactly do you mean? Ensembling is always averaging the probability of models. Best, Fabian

zxyyxzz commented 5 years ago

> Hi, what exactly do you mean? Ensembling is always averaging the probability of models. Best, Fabian

Hi Fabian, what is the difference between `python evaluation/model_selection/figure_out_what_to_submit.py -t XX` and `python inference/ensemble_predictions.py -f FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER`?

Best, Xinyu

zxyyxzz commented 5 years ago

> Hi, what exactly do you mean? Ensembling is always averaging the probability of models. Best, Fabian

Hi Fabian, I mean: what is the difference between `python evaluation/model_selection/figure_out_what_to_submit.py -t XX` and `python inference/ensemble_predictions.py -f FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER`?

Best, Xinyu

FabianIsensee commented 5 years ago

One is used to evaluate what ensembles will be used for test set prediction, the other is used to run the ensembling of the test set predictions. This is not ideal - I know. This is just how the code was created back then and I have not touched it since. You should find descriptions in the readme of when to use what. If something is unclear, please let me know. Best, Fabian
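The ensembling step itself is just a mean over the models' softmax outputs followed by an argmax. A minimal sketch (my own illustration, not the repository's actual code):

```python
import numpy as np

def ensemble(prob_maps):
    """Average per-model softmax probabilities and take the argmax.

    prob_maps: list of arrays, each of shape (classes, ...) from one model.
    Returns the hard segmentation of the ensemble.
    """
    mean_probs = np.mean(np.stack(prob_maps), axis=0)
    return mean_probs.argmax(axis=0)
```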

JiangZongKang commented 5 years ago

Hi, @FabianIsensee,

Thanks for making this great work publicly available. In the BraTS 2018 challenge, I have a question about your post-processing: what is the threshold setting? I would be grateful if you could share the relevant code.

Thanks for your attention. I'm looking forward to your reply. Best, ZongKang

FabianIsensee commented 4 years ago

Hi, BraTS 2018 has nothing to do with this repository, so you will not be able to reproduce the results with this code. The threshold is optimized on the training set cross-validation (just try out some values and pick the one that results in the highest mean dice). Best, Fabian
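The threshold search Fabian describes is a plain grid search over the cross-validation results. A minimal sketch, where `dice_fn` is a hypothetical helper that returns the per-case dice scores obtained when applying a given threshold:

```python
import numpy as np

def pick_threshold(candidate_thresholds, dice_fn):
    """Grid-search a postprocessing threshold on the training-set
    cross-validation: try each candidate value and keep the one
    with the highest mean dice."""
    means = [np.mean(dice_fn(t)) for t in candidate_thresholds]
    return candidate_thresholds[int(np.argmax(means))]
```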

nuist-xinyu commented 4 years ago

> Hi FabianIsensee: Incredible and outstanding work! You are the best researcher in the segmentation area. Do you still remember me? It's Xinyu. When I read your nnUNet paper tonight it was very exciting! It will be a strong baseline for U-Net. Does the code work now?
>
> Best, Xinyu

Oh my god, when I read your nnUNet paper tonight it made me excited too! My name is Xinyu too.

hesterwolf commented 4 years ago

Hi, when I try to run setup.py I get this error:

```
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help
```

Do you know what is going on?

FabianIsensee commented 4 years ago

`python setup.py install`

but why not `pip install .`?