Closed by bermanmaxim 7 years ago
Hi Maxim,
I had another look at paper [17] and found this:
In the case of IBSR, we split the available data into three sets. Each time, we use two of the sets as training data (approximately 100K training samples) and the third set as test data.
So, I guess that, although they did not explicitly say they employed k-fold validation, they did. However, if, as you say, they use 10 subjects for training in the given example, repeating this 3 times does not make sense, since IBSR has 18 subjects. The best thing you can do is ask them directly (I contacted Stavros Tsogkas a year ago and he is very responsive).
For our evaluation, I do not remember the exact distribution of the 6 folds, since we did it about 8 months ago, but I think it was something very simple, along the lines of:
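For concreteness, here is a minimal sketch of one "very simple" way the 18 IBSR subjects could be partitioned into 6 folds of 3 consecutive subjects each. This is only an illustration of such a scheme, not the confirmed fold assignment used in the paper:

```python
# Hypothetical illustration: split the 18 IBSR subjects into 6 folds
# of 3 consecutive subjects each. This is NOT the confirmed fold
# assignment from the paper, only a sketch of an "easy" distribution.
subjects = list(range(1, 19))  # IBSR subject IDs 1..18
folds = [subjects[i:i + 3] for i in range(0, 18, 3)]
# folds -> [[1, 2, 3], [4, 5, 6], ..., [16, 17, 18]]

# In each round, one fold is held out for testing and the
# remaining five folds (15 subjects) are used for training.
for k, test_fold in enumerate(folds):
    train = [s for s in subjects if s not in test_fold]
    print(f"fold {k}: test={test_fold}, train size={len(train)}")
```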
I hope it helps.
Regards.
Alright, thank you for your comment on this. I wanted to make sure I had properly understood your evaluation setting. The folds you mention make sense. I am in contact with the authors of [17], so I think I will get this point clarified. You are right that the phrase "each time" seems to point to a cross-validation setting.
Hello,
I have a question regarding your work (3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study) and the results on the IBSR segmentation dataset.
In Section 2.5.1, you mention:
However, if I look at the cited work [17], I see a different validation methodology: they do not mention cross-validation folds. Looking at their associated code, I see they use subjects 1:10 for training, subjects 11:12 for validation, and subjects 13:18 for testing.
I would be interested to know which cross-validation folds you used, in order to replicate your evaluation setting.
Thank you