Project-MONAI / research-contributions

Implementations of recent research prototypes/demonstrations using MONAI.
https://monai.io/
Apache License 2.0

Inconsistency in the results between the MONAI tutorial and the UNETR paper #30

Closed Puzzled-Hui closed 2 years ago

Puzzled-Hui commented 2 years ago

Dear authors!

Thanks for your great work; it is insightful for me!

However, I have found an inconsistency with the results between the tutorial in MONAI and the UNETR paper.

In the tutorial, and with my own re-implementation, the result is about 0.79, whereas it is 0.89 in the paper.

What could explain this? I noticed the paper mentions that additional training cases were introduced, bringing the total to 80 volumes. Is that the main reason?

ahatamiz commented 2 years ago

Hi @Puzzled-Hui

Thanks for the comments and questions. There are some differences that cause the discrepancies in the numbers you mentioned. Firstly, our goal in the tutorial was to relax the memory constraints and make it more accessible for users. For this purpose, the UNETR model is trained at a different voxel resolution, 1.5 × 1.5 × 2.0 mm, as opposed to 1.0 × 1.0 × 1.0 mm. Secondly, we only trained on one fold, without any extensive data augmentation. For the paper, we used a five-fold cross-validation scheme and an ensemble of 10 different models, from two different splits, for the test server submission. Note also that the 0.79 performance comes from training on a single fold, as outlined in the tutorial and the research-contributions repository (link to json file).
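
For context, the resolution difference corresponds to the resampling step in the tutorial's preprocessing. Here is a minimal sketch of the two settings, assuming a transform chain like the one in the public BTCV tutorial (the dictionary keys and interpolation modes follow that tutorial, but treat the exact chain as illustrative):

```python
from monai.transforms import Compose, LoadImaged, Orientationd, Spacingd

# Tutorial-style preprocessing: resample to a coarser 1.5 x 1.5 x 2.0 mm
# voxel spacing to keep memory usage modest.
tutorial_spacing = Compose([
    LoadImaged(keys=["image", "label"], ensure_channel_first=True),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    Spacingd(
        keys=["image", "label"],
        pixdim=(1.5, 1.5, 2.0),
        mode=("bilinear", "nearest"),
    ),
])

# Paper-style setting (assumption: the same chain with finer 1.0 mm
# isotropic spacing), which yields larger volumes and a higher memory cost.
paper_spacing = Compose([
    LoadImaged(keys=["image", "label"], ensure_channel_first=True),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    Spacingd(
        keys=["image", "label"],
        pixdim=(1.0, 1.0, 1.0),
        mode=("bilinear", "nearest"),
    ),
])
```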

In addition, as you are aware, the BTCV server has two different submission tracks, namely the standard and the free competition. For the free competition, it is customary to use extra training data, as done by previous works listed on the challenge leaderboard. Considering this, we also increased the dataset size to 80 volumes to be comparable with those approaches, and again used five-fold cross-validation for the test server submission.
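
Purely as illustration, a multi-model ensemble like the one described above could be run with MONAI's sliding_window_inference roughly as follows; the checkpoint paths, UNETR hyperparameters, and probability-averaging scheme are assumptions here, not the exact pipeline used for the submission:

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNETR

# Hypothetical checkpoints: one trained UNETR per fold/split (10 in total).
checkpoint_paths = [f"unetr_fold{i}.pt" for i in range(10)]

def ensemble_predict(image, roi_size=(96, 96, 96), device="cuda"):
    """Average softmax probabilities across the fold models, then argmax."""
    probs = None
    for path in checkpoint_paths:
        # 14 output channels: 13 BTCV organ labels plus background.
        model = UNETR(in_channels=1, out_channels=14, img_size=roi_size).to(device)
        model.load_state_dict(torch.load(path, map_location=device))
        model.eval()
        with torch.no_grad():
            logits = sliding_window_inference(
                image, roi_size, sw_batch_size=4, predictor=model, overlap=0.5
            )
        p = torch.softmax(logits, dim=1)
        probs = p if probs is None else probs + p
    # Final label map from the averaged probabilities.
    return torch.argmax(probs / len(checkpoint_paths), dim=1, keepdim=True)
```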

I hope these explanations are useful.

Thanks

Puzzled-Hui commented 2 years ago

Hi,

I really appreciate your reply, and I have figured this out.

Thanks again for your great work and your kind reply!

Merry Christmas in advance :)

ahatamiz commented 2 years ago

Hi @Puzzled-Hui

Thanks for the comments and compliments.

Happy Holidays :)

overbestfitting commented 2 years ago

Thanks so much for the reply. May I ask where you obtained the additional 50 volumes that bring the training set to 80?

@ahatamiz