Closed hitersyw closed 1 year ago
Hi,
For training and testing, 4-fold cross-validation on the training dataset is used. Data augmentation is done offline on all of the training images, which are kept under "data_path". For the test images, I used the original training images and saved them under "data_path_test". (Note: this is done to avoid testing on augmented images.)
For example, in Exp1 the training videos are (2, 4, 5, 6, 7, 8) and the test videos are (1, 3): remove all images of videos (2, 4, 5, 6, 7, 8) from "data_path_test" and remove all images of videos (1, 3) from "data_path". The network is then trained on all the remaining frames and tested on videos 1 and 3.
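The fold construction above can be sketched as a small helper that partitions image filenames into train and test sets by video number. This is only a sketch: the `dataset_(\d+)` pattern assumes filenames embed the video id the way the EndoVis 2017 folders do (e.g. `instrument_dataset_1/...`); adapt the regex to your actual directory layout.

```python
import re
from typing import Iterable, List, Set, Tuple

def split_fold(filenames: Iterable[str], test_videos: Set[int]) -> Tuple[List[str], List[str]]:
    """Partition image paths into (train, test) lists by embedded video id.

    Assumes each path contains 'dataset_<N>' (hypothetical pattern based on
    the EndoVis 2017 folder naming); files with no match go to the train set.
    """
    train, test = [], []
    for name in filenames:
        m = re.search(r"dataset_(\d+)", name)
        vid = int(m.group(1)) if m else None
        (test if vid in test_videos else train).append(name)
    return train, test

# Fold "Exp1": test on videos 1 and 3, train on the rest.
files = [f"instrument_dataset_{v}/frame000.png" for v in range(1, 9)]
train_files, test_files = split_fold(files, {1, 3})
```

The same function reproduces each of the four folds by swapping in that fold's test-video set.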
Could you please explain it more clearly? I am still not able to follow it.
The dataset was shared as part of the MICCAI 2017 challenge and we don't have the right to redistribute it. All the data can be downloaded from https://endovissub2017-roboticinstrumentsegmentation.grand-challenge.org/Downloads/ after registration.
Offline augmentation is mainly done to balance the dataset with respect to the class distribution. As long as you have a similar representation for the different classes, it should be okay.
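One simple way to decide how much offline augmentation each class needs is plain oversampling toward the largest class. The sketch below is an assumption about what "making the dataset balanced" means here, not the authors' exact procedure: it computes how many augmented copies of each original image would bring every class up to (roughly) the size of the largest one.

```python
from collections import Counter
from typing import Dict, Iterable, Hashable

def augmentation_counts(labels: Iterable[Hashable]) -> Dict[Hashable, int]:
    """Return how many augmented copies each original image of a class needs
    so that every class approaches the size of the largest class.

    Simple floor-division oversampling: a class with n images and target
    size t gets (t // n) - 1 extra copies per image.
    """
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target // n - 1 for cls, n in counts.items()}

# Hypothetical class distribution: 100 frames of one tool, 25 of another.
labels = ["bipolar_forceps"] * 100 + ["scissors"] * 25
print(augmentation_counts(labels))  # {'bipolar_forceps': 0, 'scissors': 3}
```

With 3 augmented copies per original "scissors" frame, that class grows to 100 frames, matching the majority class.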
Could you please tell me how to build the training dataset?
I use the original EndoVis 2017 dataset, but it seems that it does not match the training dataset expected by R_Roboscene.
Thanks!