woodywff / brats_2019

A 3D U-Net Based Solution to BraTS 2019 in Keras
https://arxiv.org/abs/1909.12901
GNU General Public License v3.0

Training dataset dice score lower than validation dataset #6

Open Shawn0099 opened 4 years ago

Shawn0099 commented 4 years ago

Hi, I trained your network for about 100 epochs. My training loss is lower than my validation loss, but the dice coefficient on the training dataset is lower than on the validation dataset. I also drew boxplots using evaluate.py; my validation result is almost the same as the result in your paper, but the training dataset result is worse. Is there anything wrong with my settings? Another question is about calculating the Hausdorff distance. I've computed sensitivity and specificity and my results are close to yours, but my Hausdorff distance is much higher than normal (about 20). I used SimpleITK.HausdorffDistanceImageFilter. Could you tell me how you did this? Many thanks!
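For context on the high number: SimpleITK's HausdorffDistanceImageFilter returns the maximum (worst-case) surface distance, while the BraTS/CBICA leaderboard reports the 95th-percentile Hausdorff distance (HD95), which discards outlier surface points and is therefore much smaller. A minimal HD95 sketch on binary masks (not the repo's code; the helper names here are made up for illustration):

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def surface_points(mask):
    # boundary voxels = mask minus its binary erosion
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded)

def hd95(pred, gt):
    # 95th-percentile symmetric Hausdorff distance between two binary masks
    p, g = surface_points(pred), surface_points(gt)
    d = cdist(p, g)  # pairwise Euclidean distances between surface voxels
    # directed 95th-percentile distances, symmetrized by taking the max
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Note this uses voxel indices; for anisotropic volumes the coordinates would need scaling by the voxel spacing first.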

woodywff commented 4 years ago

The second answer comes first :-) I didn't calculate the four metrics (dice, Hausdorff distance, sensitivity, and specificity) myself. CBICA's evaluation system takes care of that: you just need to sign up, log in, and upload your prediction results, and you'll get the metrics back.
brats_2019/demo_task1/draw_evaluation.py plots the figures from the downloaded .csv files.
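For reference, a minimal sketch of that kind of plot (this is not the repo's draw_evaluation.py; the column names like `Dice_ET` are assumptions loosely based on the CBICA CSV layout, and the inline CSV is a toy stand-in for a downloaded results file):

```python
import io
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# toy stand-in for a downloaded CBICA results file; real files have more columns
csv_text = """Label,Dice_ET,Dice_WT,Dice_TC
sub1,0.80,0.91,0.85
sub2,0.75,0.89,0.82
sub3,0.82,0.93,0.88
"""

df = pd.read_csv(io.StringIO(csv_text))
regions = ["Dice_ET", "Dice_WT", "Dice_TC"]

# one box per tumor region (enhancing tumor, whole tumor, tumor core)
fig, ax = plt.subplots()
ax.boxplot([df[r].values for r in regions])
ax.set_xticklabels(regions)
ax.set_ylabel("Dice")
fig.savefig("dice_boxplot.png")
```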

When you say training and validation dataset, how many subjects have you taken into account? The final training is on all 335 training subjects, and the validation is on the 125 validation subjects. Also notice that I mentioned two patching strategies in the article, and for each of them the network was trained for 100 epochs. That's what comes to mind right now. Good luck to you! Keep in touch :-)
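One possible (hedged) explanation for the train/validation dice gap worth checking: the dice monitored during training is usually a soft dice computed on raw probabilities over random patches, while the evaluation dice is a hard dice on thresholded full-volume segmentations, and the two can differ noticeably on the same predictions. A small sketch of the difference (function names are illustrative, not from this repo):

```python
import numpy as np

def soft_dice(prob, gt, eps=1e-5):
    # dice on raw probabilities, as typically monitored as a training metric
    inter = np.sum(prob * gt)
    return (2.0 * inter + eps) / (np.sum(prob) + np.sum(gt) + eps)

def hard_dice(prob, gt, thr=0.5, eps=1e-5):
    # dice after thresholding, as computed on the final segmentation maps
    pred = (prob > thr).astype(np.float32)
    inter = np.sum(pred * gt)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(gt) + eps)
```

A confident but uncalibrated network (e.g. predicting 0.7 everywhere inside the true region) scores lower on soft dice than on hard dice, so the training curve can sit below an evaluation computed the other way.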