yulequan / melanoma-recognition

Repository of paper "Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks"
http://www.cse.cuhk.edu.hk/~lqyu/skin/

training segmentation network #5

Open kkirtac opened 6 years ago

kkirtac commented 6 years ago

Hi @yulequan,

I am trying to reproduce your segmentation results.

I want to understand what exactly your input .list file contains. Does each row hold file paths like ISIC_0000000.jpg ISIC_0000000_Segmentation.png (after cropping with respect to the segmentation mask, then resizing to 480x480)? Can you give an example of one row from your .list file?

Thanks in advance.

muyulin commented 6 years ago

Hi @kkirtac, the .list file is a plain text file; each row has the form "ISIC_0000000.jpg ISIC_0000000_Segmentation.png", for both the training part and the testing part. I have tried it and it works. The 480*480 image is obtained by deconvolution rather than by resizing. I hope this helps.

kkirtac commented 6 years ago

Okay, it seems that feeding the network with cropped images (without resizing to a canonical size) works well. But I still do not understand at which point we get 480x480. The output size of the network should be the same as the input size.

kkirtac commented 5 years ago

Hi @yulequan @muyulin,

muyulin commented 5 years ago

Hi @kkirtac, I think the author resized the images to 480 before they were sent to the network; only 480 matches the training prototxt. If possible, I hope @yulequan can give a detailed explanation.

yulequan commented 5 years ago

Hi @kkirtac, @muyulin. In the segmentation task, we don't do any resampling operations. For each training image, we first find the bounding box from the annotation ground truth. If the bounding box is smaller than 480*480, we enlarge it. After that, we crop the bounding box from the training image as a subimage (you can see this subimage mainly contains the skin lesion). In order to include the background, we also crop another subimage of the same size at a random location from the whole image (it contains the background).

In summary, we crop two subimages from each training image. These subimages are listed in the .list file. When training the network, we use the Caffe data layer to randomly crop 480*480 patches from these subimages as network input. In the testing phase, the network input size is also 480*480. We use a sliding window strategy to tile these sub segmentation results.

Btw, the cropped subimages are a little larger than the annotation bounding boxes.
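
For readers trying to reproduce this, here is a minimal sketch of the two-crop preparation described above. The function and variable names are illustrative, and the enlargement margin is an assumption (the repo does not state the exact value used); it is not the authors' code.

```python
import numpy as np

def lesion_and_background_crops(image, mask, crop=480, margin=20):
    """Crop a lesion-centred subimage plus a random background subimage.

    `margin` is an assumed extra border; the image is assumed to be at
    least `crop` pixels on each side.
    """
    h, w = mask.shape[:2]

    # 1. Bounding box of the lesion from the binary ground-truth mask.
    ys, xs = np.where(mask > 0)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1

    def enlarge(lo, hi, size, limit):
        # Add a margin, grow the span to at least `size`, clamp to the image.
        lo, hi = lo - margin, hi + margin
        span = max(hi - lo, size)
        centre = (lo + hi) // 2
        lo = centre - span // 2
        hi = lo + span
        if lo < 0:
            lo, hi = 0, span
        if hi > limit:
            lo, hi = limit - span, limit
        return max(lo, 0), hi

    # 2. Enlarge the box (a little larger than the annotation box) and crop
    #    the lesion-centred subimage.
    y0, y1 = enlarge(y0, y1, crop, h)
    x0, x1 = enlarge(x0, x1, crop, w)
    lesion = (image[y0:y1, x0:x1], mask[y0:y1, x0:x1])

    # 3. A second subimage of the same size at a random location, so that
    #    plain background is also represented in the training set.
    bh, bw = y1 - y0, x1 - x0
    ry = np.random.randint(0, max(h - bh, 0) + 1)
    rx = np.random.randint(0, max(w - bw, 0) + 1)
    background = (image[ry:ry + bh, rx:rx + bw], mask[ry:ry + bh, rx:rx + bw])

    return lesion, background
```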

kkirtac commented 5 years ago

Thank you @yulequan @muyulin, it is much clearer now.

I understand from the Caffe data transformer that in the test phase (on validation samples) a central crop and random mirroring are applied.

> In the testing phase, the network input size is also 480*480. We use a sliding window strategy to tile these sub segmentation results.

Just 2 questions about this:

Besides all,

Thanks.

yulequan commented 5 years ago

  1. When using the sliding window, there is overlap between different windows. If the image dimension is not a multiple of 480, we adjust the overlap of the last sliding window.
  2. The validation loss is only the segmentation performance on one 480*480 subimage.
  3. If I remember correctly, we perform the same mean subtraction (RGB values) for training and test samples.
  4. I forget the exact percentage of the training/validation split. It may be 20%.
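
A minimal sketch of the window placement implied by point 1, with an assumed stride (the actual overlap is not stated in the repo); the last window is simply shifted back so that it ends exactly at the image border:

```python
def window_starts(length, window=480, stride=320):
    """Start offsets of sliding windows along one image dimension.

    `stride` is an assumed value. When `length` is not a multiple of the
    stride, the final window is shifted so it ends exactly at `length`,
    which adjusts its overlap with the previous window.
    """
    starts = list(range(0, max(length - window, 0) + 1, stride))
    if starts[-1] + window < length:
        starts.append(length - window)
    return starts

# e.g. window_starts(1000) -> [0, 320, 520]
```
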
kkirtac commented 5 years ago

Hi @yulequan,

  1. What was the overlap ratio between consecutive sliding windows (or the stride between consecutive windows)? Were you simply averaging pixels in overlapping regions while merging two subimage results?
yulequan commented 5 years ago

I forget the specific overlap ratio. Yes, we simply average the probabilities.
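
For completeness, a minimal sketch of merging overlapping window predictions by averaging probabilities, reusing the `window_starts` helper from the sketch above. `predict_window` is a hypothetical stand-in for one forward pass of the segmentation network, and the stride is again an assumption.

```python
import numpy as np

def tile_and_average(image, predict_window, window=480, stride=320):
    """Average per-pixel probabilities over overlapping sliding windows."""
    h, w = image.shape[:2]
    prob_sum = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    for y in window_starts(h, window, stride):
        for x in window_starts(w, window, stride):
            patch = image[y:y + window, x:x + window]
            # Accumulate the probability map and how often each pixel was covered.
            prob_sum[y:y + window, x:x + window] += predict_window(patch)
            counts[y:y + window, x:x + window] += 1

    return prob_sum / np.maximum(counts, 1)
```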

kkirtac commented 5 years ago

Hi @yulequan,

I have implemented the same segmentation pipeline using Keras (TensorFlow). I set aside a random 10% of my training data as validation data, then prepared the training samples with the same method you explained. I ended up with 1399 training samples (including the resampled background images) and 90 validation samples.

I am experiencing overfitting issues. Please see my validation error when I fine-tune only the final layers versus fine-tuning all layers as your training prototxt suggests. When fine-tuning only the final layers, I skipped the multi-scale feature aggregation and just performed deconvolution on the output of the final convolution layer with stride 32. How did you overcome overfitting while fine-tuning all layers?

error_plot
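
For reference, a hypothetical Keras sketch of the two fine-tuning regimes compared above; `model` and the "decoder_" naming convention are assumptions, not the actual network used here.

```python
from tensorflow import keras

def set_finetune_mode(model, finetune_all=False):
    """Switch between fine-tuning all layers and only the final (decoder) layers."""
    for layer in model.layers:
        if finetune_all:
            layer.trainable = True
        else:
            # Freeze the pretrained backbone; train only layers named "decoder_*".
            layer.trainable = layer.name.startswith("decoder_")
    # Recompile so the trainable flags take effect.
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
                  loss="binary_crossentropy")
    return model
```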

gopikav commented 5 years ago

Hi @yulequan,

Can you please share with us how you augmented the training and testing data?

zjz5250 commented 4 years ago

Hi @yulequan, how do you get the bounding box from the annotation ground truth?

> Hi @kkirtac, @muyulin. In the segmentation task, we don't do any resampling operations. For each training image, we first find the bounding box from the annotation ground truth. If the bounding box is smaller than 480*480, we enlarge it. After that, we crop the bounding box from the training image as a subimage (you can see this subimage mainly contains the skin lesion). In order to include the background, we also crop another subimage of the same size at a random location from the whole image (it contains the background).
>
> In summary, we crop two subimages from each training image. These subimages are listed in the .list file. When training the network, we use the Caffe data layer to randomly crop 480*480 patches from these subimages as network input. In the testing phase, the network input size is also 480*480. We use a sliding window strategy to tile these sub segmentation results.
>
> Btw, the cropped subimages are a little larger than the annotation bounding boxes.

Hi @yulequan, how do you find the bounding box from the annotation ground truth? There are only mask files in the ground truth zip file, aren't there? Did you add bounding boxes with a labeling tool yourself?

zjz5250 commented 4 years ago

@yulequan @kkirtac I found "ignore_label: 255" in the prototxt, so the mask files should be binarized. Am I right?
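
For what it's worth, a minimal sketch of the conversion this implies, assuming the ISIC ground-truth PNGs store the lesion as 255 and the background as 0 (a raw 0/255 mask would otherwise collide with Caffe's ignore_label); file paths and the function name are illustrative.

```python
import numpy as np
from PIL import Image

def binarize_mask(in_path, out_path):
    """Convert a 0/255 segmentation PNG into 0/1 labels.

    With "ignore_label: 255" in the loss layer, a raw 0/255 mask would make
    Caffe ignore every lesion pixel, so the foreground must be remapped to 1
    before training.
    """
    mask = np.array(Image.open(in_path).convert("L"))
    labels = (mask > 127).astype(np.uint8)  # 0 = background, 1 = lesion
    Image.fromarray(labels).save(out_path)
```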