Closed: steve-seagal closed this issue 2 years ago
Thanks for the interest in this work. The repository is built as a tutorial for multi-organ segmentation using publicly available data and easy-to-access GPU resources, such as single-GPU training. The performance reported in the paper was obtained with distributed training, tuned hyperparameters, an ensemble of more than 20 models, and additional collaborator data (80 scans in total for training and validation). These settings differ from the tutorial's configuration of a single GPU, a single model, and the 30 publicly available BTCV scans. Thanks!
The Dice score is 0.918 in the paper but 0.8225/0.8186 in the repository. Why is there such a large gap between them? Are these evaluated on the validation set or the test set? And what overlap parameter was used for the 0.918 score in the paper?
Thank you.