mmaaz60 opened this issue 2 years ago
Hi @mmaaz60
Thanks for your interest in our work. For the leaderboard benchmarks, an ensemble of 20 models was used. In addition, following previous SOTA on this leaderboard, such as PaNN, we used extra private training data (80 volumes total). We discuss this in Section 4.3 of the UNETR paper (which uses the same settings).
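For readers unfamiliar with test-time ensembling: a common approach (not necessarily the authors' exact pipeline — the function and shapes below are illustrative assumptions) is to average the per-model softmax probability maps and take the argmax:

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average per-model class-probability maps and take the argmax.

    prob_maps: list of arrays, each shaped (num_classes, D, H, W),
    holding one model's softmax output for a volume. This is a
    generic sketch, not the repository's actual inference code.
    """
    avg = np.mean(np.stack(prob_maps, axis=0), axis=0)  # (C, D, H, W)
    return np.argmax(avg, axis=0)                       # (D, H, W) label map

# toy example: 3 "models", 2 classes, a tiny 1x2x2 volume
rng = np.random.default_rng(0)
preds = [rng.random((2, 1, 2, 2)) for _ in range(3)]
labels = ensemble_predict(preds)
```

Averaging probabilities (rather than majority-voting hard labels) tends to be more stable when models disagree near organ boundaries.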
Best,
Hello @ahatamiz,
Thank you for your amazing work! Your results are super inspiring. I would like to test some of my own ideas for neural network architecture modifications, but I want to be confident in my experiments. Do you have test-set results without the extra data? And is this extra data publicly available? It is quite difficult to try to outperform your SOTA with a dataset almost three times smaller.
Best regards,
Hi, I would also be interested in the additional data, or at least in the results without it. Is there a chance you could share this with us? Thank you.
Best
Hi Authors,
I wonder if you could provide detailed instructions (and, ideally, pretrained models) to reproduce the results reported in the paper (0.918 avg. Dice score). Thanks!
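For anyone attempting to verify a reproduction against the reported average Dice: the standard per-class Dice metric can be sketched as below. This is a generic implementation assuming integer label maps and a background class 0; it is not the official BTCV evaluation script, whose exact handling of absent classes may differ.

```python
import numpy as np

def mean_dice(pred, target, num_classes):
    """Mean Dice over foreground classes (class 0 = background).

    pred, target: integer label arrays of identical shape.
    A minimal sketch of the standard metric; handling of classes
    absent from both volumes is an assumption made here.
    """
    scores = []
    for c in range(1, num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent in both prediction and ground truth
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores)) if scores else 0.0

pred = np.array([[0, 1], [2, 2]])
target = np.array([[0, 1], [2, 0]])
score = mean_dice(pred, target, num_classes=3)
```

In this toy case class 1 matches exactly (Dice 1.0) and class 2 overlaps in one of three labeled voxels (Dice 2/3), so the mean is 5/6.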