ahme0307 / streoscene

StreoScenNet: Surgical Stereo Robotic Scene segmentation
BSD 2-Clause "Simplified" License

hello #1

Closed hitersyw closed 1 year ago

hitersyw commented 4 years ago

Could you please tell me how to build the training dataset?

I used the original EndoVis 2017 dataset, but it does not seem to match the training dataset expected by R_Roboscene.

Thanks!

ahme0307 commented 4 years ago

Hi,

For training and testing, 4-fold cross-validation on the training dataset is used. Data augmentation is done offline on all of the training images, and the results are kept under "data_path". For the test images, I used the original (unaugmented) training images and saved them under "data_path_test". (Note: this is done to avoid testing on augmented images.)

For example, in Exp1 the training videos are (2, 4, 5, 6, 7, 8) and the test videos are (1, 3): remove all images of videos 2, 4, 5, 6, 7, and 8 from "data_path_test", and remove all images of videos 1 and 3 from "data_path". The network is then trained on all the remaining frames and tested on videos 1 and 3.
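The fold setup above can be sketched as follows. This is only an illustrative helper, not code from the repository; the `seqN_` filename pattern and the `split_fold` function are assumptions you would adapt to the actual naming in your copy of the dataset:

```python
import re

def split_fold(filenames, test_videos=frozenset({1, 3})):
    """Partition frame filenames into (train, test) lists by video id.

    Assumes each filename embeds its video id as a "seqN_" prefix
    (hypothetical naming); frames from videos in `test_videos` go to
    the test split, everything else to the training split.
    """
    train, test = [], []
    for name in filenames:
        match = re.match(r"seq(\d+)_", name)
        if match is None:
            continue  # skip files that do not follow the assumed pattern
        video_id = int(match.group(1))
        (test if video_id in test_videos else train).append(name)
    return train, test
```

For Exp1, calling `split_fold(frames, test_videos={1, 3})` would send every `seq1_*` and `seq3_*` frame to the test list and the frames from the remaining videos to the training list, matching the manual file removal described above.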

vrk7 commented 1 year ago

Could you please explain it more clearly? I am still not able to follow.

ahmedkmSintef commented 1 year ago

The dataset is shared as part of the MICCAI 2017 challenge, and we do not have the right to redistribute it. All the data can be downloaded from https://endovissub2017-roboticinstrumentsegmentation.grand-challenge.org/Downloads/ after registration.

Offline augmentation is mainly done to balance the dataset according to the class distribution. As long as you have a similar representation of the different classes, it should be fine.
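One way to decide how much to augment each class is to oversample the smaller classes until they roughly match the largest one. The sketch below is an assumption about the balancing idea, not the repository's actual pipeline; `labels` is a hypothetical list of per-image class labels:

```python
from collections import Counter
import math

def augmentation_multipliers(labels):
    """Return, per class, how many copies of each original image to keep
    (original + augmented) so every class roughly matches the largest class.

    A multiplier of 1 means the class needs no augmentation; 2 means one
    augmented copy per original image, and so on.
    """
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: math.ceil(target / n) for cls, n in counts.items()}
```

With a label list containing four images of one class and two of another, the smaller class gets a multiplier of 2, so each of its images would receive one augmented copy (e.g. a flip or rotation) to even out the distribution.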