I found in the files the code for training but could you please elaborate in detail on how to run it? In particular:
• What were the training procedure and default hyperparameters?
• The paper mentioned that the model performs inter-domain augmentation. Should the dataset be divided by domains for training? What was your dataset structure for Camelyon17?
• Which dataset size do you recommend training on (approximate number of patches)?
I used the original training procedure, including the default hyperparameters, and this worked fine for me.
You're right: the training and test data should be divided into domains. On Camelyon17, I did that according to the hospitals. I sorted the patches from the WSIs of the 5 clinics into folders trainA to trainE, and did the same for the test WSIs, so all patches from one domain end up in one folder.
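For reference, the folder layout described above could be produced with a short script along these lines. This is only a hypothetical sketch: the paths, the `patient_XXX` filename pattern, and the assumption of 20 patients per Camelyon17 center are illustrative and should be checked against your own data and metadata before use.

```python
import shutil
from pathlib import Path

# Hypothetical sketch: sort a flat folder of Camelyon17 patches into
# per-domain folders trainA..trainE (one per hospital). Assumes each
# patch filename starts with "patient_XXX" and that patients 000-019
# belong to center 0, 020-039 to center 1, and so on -- verify this
# against your own patient-to-center metadata.

SRC = Path("patches")   # flat folder containing all extracted patches
DST = Path("dataset")   # output root: dataset/trainA ... dataset/trainE
DOMAINS = "ABCDE"

def domain_for(patch_name: str) -> str:
    """Map a patch filename to its hospital/domain letter."""
    patient_id = int(patch_name.split("_")[1])  # "patient_042_..." -> 42
    return DOMAINS[patient_id // 20]            # 20 patients per center

def sort_patches() -> None:
    for patch in SRC.glob("patient_*.png"):
        out_dir = DST / f"train{domain_for(patch.name)}"
        out_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(patch, out_dir / patch.name)

if __name__ == "__main__":
    sort_patches()
```

The same mapping applied to test WSIs yields matching testA..testE folders, keeping each domain's patches together.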
It works surprisingly well for small datasets. We conducted a study on that for nuclei segmentation (https://doi.org/10.3390/jimaging8030071) on the DSB18 Kaggle challenge, where the domains were very imbalanced and had only 8-588 training images each. For training on Camelyon, I did have a lot more patches, roughly 250,000 per domain.