kkirtac opened this issue 6 years ago
Hi @kkirtac, the .list file is a plain text file; each line has the form "ISIC_0000000.jpg ISIC_0000000_Segmentation.png". Both the training part and the testing part use this form; I have tried it and it works. The 480x480 image is obtained by deconvolution rather than by resizing. I hope this helps you.
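For illustration, a couple of rows of such a .list file would then look like the following (the filenames are just examples following the ISIC naming pattern mentioned above):

```
ISIC_0000000.jpg ISIC_0000000_Segmentation.png
ISIC_0000001.jpg ISIC_0000001_Segmentation.png
```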
Okay, it seems that feeding the network with cropped images (without resizing to a canonical size) works well. But I still do not understand at which point we get 480x480. The output size of the network should be the same as the input size.
Hi @yulequan @muyulin ,
I want to understand whether every cropped lesion is resized to 480x480 before being fed into the network. crop_size is set to 480 in the training prototxt. How does this operate during training?
In an earlier post, Yu noted that no resampling is applied in the segmentation network. But the training prototxt also sets mirror: true and crop_size: 480 for the validation set. Do you apply the same preprocessing steps (lesion cropping and resizing) to the validation samples? What is your training/validation split percentage? I assume no resizing or cropping is applied in the testing phase (to the test samples released by the challenge organizers), so wouldn't it make more sense to use the validation samples without resizing or cropping as well?
Hi @kkirtac, I think the author resized the image to 480 before it was sent to the network; only then does it match the 480 in the training prototxt. If possible, I hope @yulequan can give a detailed explanation.
Hi @kkirtac, @muyulin. In the segmentation task, we don't do any resampling operations. For each training image, we first find the bounding box from the annotation ground truth. If the bounding box is smaller than 480x480, we enlarge it. After that, we crop the bounding box from the training image as a subimage (you can see that this subimage mainly contains the skin lesion). In order to include the background, we also randomly crop another subimage of the same size from the whole image (it contains the background).
In summary, we crop two subimages from one training image. These subimages are listed in the .list file. When training the network, we use the Caffe data layer to randomly crop 480x480 patches from these subimages as network input. In the testing phase, the network input size is also 480x480. We use a sliding window strategy to tile these sub-segmentation results.
Btw, the cropped subimages are a little larger than the annotation bounding boxes.
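If it helps, here is a minimal numpy/OpenCV sketch of how I read this crop-preparation step; the function name, the margin value, and the exact enlarging rule are my own assumptions, not taken from the released code:

```python
import cv2
import numpy as np

def lesion_and_background_crops(image_path, mask_path, min_size=480, margin=20):
    """Crop a lesion-centered subimage (its bounding box enlarged to at least
    min_size x min_size plus a small margin) and a same-size random background
    subimage, together with the matching mask crops. Sketch only."""
    img = cv2.imread(image_path)                          # H x W x 3
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)    # ground-truth mask

    # Bounding box of the lesion from the binary annotation mask.
    ys, xs = np.where(mask > 0)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()

    # Enlarge the box by a small margin and to at least min_size x min_size
    # (clipped so it never exceeds the image itself).
    h, w = mask.shape
    bh = min(max(y1 - y0 + 1 + 2 * margin, min_size), h)
    bw = min(max(x1 - x0 + 1 + 2 * margin, min_size), w)
    cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
    top = int(np.clip(cy - bh // 2, 0, h - bh))
    left = int(np.clip(cx - bw // 2, 0, w - bw))

    # Second, randomly placed subimage of the same size to cover background.
    rt = np.random.randint(0, h - bh + 1)
    rl = np.random.randint(0, w - bw + 1)

    crops = []
    for t, l in [(top, left), (rt, rl)]:
        crops.append((img[t:t + bh, l:l + bw], mask[t:t + bh, l:l + bw]))
    return crops  # [(lesion_img, lesion_mask), (background_img, background_mask)]
```

Both crops and their mask crops would then be written to disk and listed as image/mask pairs in the .list file.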
Thank you @yulequan @muyulin, it is much clearer now.
I understand from the Caffe DataTransformer that in the test phase (on validation samples) a central crop and random mirroring are applied.
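For anyone else wondering how crop_size: 480 and mirror: true play together, here is a rough Python emulation of that behavior as I understand it (random crop in the TRAIN phase, central crop in the TEST phase, random horizontal mirroring in both). This is my reading of the transformer, not code from this repository:

```python
import numpy as np

def caffe_like_transform(img, crop_size=480, mirror=True, train=False):
    """Rough emulation of a crop_size/mirror transform: random crop when
    training, central crop when testing, optional random horizontal flip.
    Assumes the input is at least crop_size x crop_size."""
    h, w = img.shape[:2]
    if train:
        top = np.random.randint(0, h - crop_size + 1)
        left = np.random.randint(0, w - crop_size + 1)
    else:
        top = (h - crop_size) // 2
        left = (w - crop_size) // 2
    out = img[top:top + crop_size, left:left + crop_size]
    if mirror and np.random.rand() < 0.5:
        out = out[:, ::-1]
    return out
```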
In the testing phase, the network input size is also 480x480. We use a sliding window strategy to tile these sub-segmentation results.
Just 2 questions about this:
1. What overlap ratio do you use between neighboring sliding windows?
2. How do you combine the predictions in the overlapping regions: do you simply average the probabilities?
Besides all that, thanks.
Hi @yulequan ,
I forget the specific overlap ratio. Yes, we use simple averaging of the probabilities.
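In case it is useful for others, a minimal sketch of the sliding-window testing with probability averaging could look like the following; the stride (i.e. the overlap ratio) is a placeholder since the exact value was not recalled, and predict_fn stands for one forward pass of the network on a 480x480 patch:

```python
import numpy as np

def sliding_window_segment(image, predict_fn, win=480, stride=240):
    """Tile win x win windows over the image and average the per-pixel
    lesion probabilities where windows overlap. Assumes the image is at
    least win x win and that predict_fn returns a probability map of the
    same spatial size as its input patch."""
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)

    ys = list(range(0, h - win + 1, stride))
    xs = list(range(0, w - win + 1, stride))
    if ys[-1] != h - win:   # make sure the bottom border is covered
        ys.append(h - win)
    if xs[-1] != w - win:   # make sure the right border is covered
        xs.append(w - win)

    for y in ys:
        for x in xs:
            patch = image[y:y + win, x:x + win]
            prob[y:y + win, x:x + win] += predict_fn(patch)
            count[y:y + win, x:x + win] += 1.0
    return prob / count
```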
Hi @yulequan ,
I have implemented the same segmentation pipeline using keras-tf. I left a random 10% of my training data as validation data. Then I prepared the training samples with the same method you explained. I finally ended up with 1399 training samples (including the resampled background images) and 90 validation samples.
I am experiencing overfitting issues. Please see my validation error when I fine-tune only the final layers versus fine-tuning all layers as your training prototxt suggests. When fine-tuning only the final layers, I skipped multi-scale feature aggregation and just performed deconvolution on the output of the final convolution layer with stride 32. How did you overcome overfitting while fine-tuning all layers?
Hi @yulequan,
Can you please share with us how you augmented the training and testing data?
Hi @yulequan, how do you get the bounding box from the annotation ground truth?
Hi @yulequan, how do you find the bounding box from the annotation ground truth? There are only mask files in the ground truth zip file, aren't there? Do you add the bounding boxes with a labeling tool yourself?
@yulequan @kkirtac I find "ignore_label: 255" in the prototxt, so the mask files should be binarized. Am I right?
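If that reading is correct, the remapping could be as simple as the sketch below (the output filename is just an example; keeping only 0 for background and 1 for lesion means nothing collides with ignore_label: 255):

```python
import cv2

# ISIC masks are 0/255 PNGs; relabel the foreground to 1 so lesion pixels
# are not treated as ignore_label: 255 during training (my assumption).
mask = cv2.imread("ISIC_0000000_Segmentation.png", cv2.IMREAD_GRAYSCALE)
label = (mask > 127).astype("uint8")        # 0 = background, 1 = lesion
cv2.imwrite("ISIC_0000000_label.png", label)
```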
Hi @yulequan ,
I am trying to reproduce your segmentation results.
I want to understand what specifically you have in your input .list file. Do you have file paths like ISIC_0000000.jpg ISIC_0000000_Segmentation.png (after cropping with respect to the segmentation mask, then resizing to 480x480) at each row of the file? Can you give an example of one row from your .list file? Thanks in advance.