You said in the paper, "To train the FCRN, we first crop an
sub-image from every original dermoscopy image with ground
truth by automatically figuring out the smallest rectangle
containing the lesion region and enlarging its length and width
by 1.1 -1.3 times in order to include more neighboring
pixels for training."
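For context, my understanding of the cropping step quoted above is something like the following sketch (a hypothetical helper using NumPy, not the authors' actual code; `factor` stands in for the 1.1-1.3 enlargement):

```python
import numpy as np

def crop_lesion(image, mask, factor=1.2):
    """Crop the smallest rectangle containing the lesion mask,
    enlarged by `factor` (1.1-1.3 in the paper) and clipped to
    the image bounds. Hypothetical sketch, not the authors' code."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    h, w = y1 - y0, x1 - x0
    # Pad each side by half the extra length/width.
    dy = int(round(h * (factor - 1) / 2))
    dx = int(round(w * (factor - 1) / 2))
    y0, y1 = max(0, y0 - dy), min(mask.shape[0], y1 + dy)
    x0, x1 = max(0, x0 - dx), min(mask.shape[1], x1 + dx)
    return image[y0:y1, x0:x1]
```

Since each lesion's bounding box differs, this would produce sub-images of varying size, which is what motivates my questions below.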
For segmentation, before feeding the cropped sub-images into the FCRN, do you resize them to a fixed size, or do you use them as input directly?
Another question: for segmentation, are all the images within a batch the same size?
Thank you for sharing your work!