yun-liu / RCF

Richer Convolutional Features for Edge Detection

the result of side_output5 is bad #33

Closed CangHaiQingYue closed 6 years ago

CangHaiQingYue commented 6 years ago

Hi, Dr. Liu: Thanks for your attention. I'm trying to implement this work in TensorFlow. After training, I find that side outputs 1-4 look like the results in your paper, but output 5 is bad, and I can't find the reason. The fact that outputs 1-4 match suggests my model is right, but that doesn't explain output 5. Can you give me some advice from your rich experience, perhaps not specific to TensorFlow? Attached are my results.

[Attached images: side_output_1 through side_output_5]

yun-liu commented 6 years ago

How do you deal with the dilation operation in conv5? Also, what learning rates do you use for the conv5 layers?

CangHaiQingYue commented 6 years ago

Thanks for your answer. I will check my code and report back later. I used tf.nn.conv2d_transpose() to do the dilation operation; the weights were a bilinear kernel. It's hard to set different learning rates for different layers in TF, so I set all trainable weights to the same learning rate: 1e-6, divided by 10 every 10k iterations.
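For reference, a minimal sketch of that schedule, assuming the TF 1.x `tf.train.exponential_decay` API (the momentum value is an assumption):

```python
import tensorflow as tf

# Step-decay schedule described above: start at 1e-6 and
# divide by 10 every 10k iterations (staircase=True).
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(learning_rate=1e-6,
                                           global_step=global_step,
                                           decay_steps=10000,
                                           decay_rate=0.1,
                                           staircase=True)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)  # momentum assumed
```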

Here is my upsampling code:

```python
import numpy as np
import tensorflow as tf


def get_kernel_size(factor):
    # Standard kernel size for a given upsampling factor (assumed; this
    # helper was referenced but not included in the original snippet).
    return 2 * factor - factor % 2


def upsample_filt(size):
    """Make a 2D bilinear kernel suitable for upsampling of the given (h, w) size."""
    factor = (size + 1) // 2
    if size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:size, :size]
    return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)


def bilinear_upsample_weights(factor, number_of_classes):
    """Create a weights matrix for transposed convolution with bilinear filter initialization."""
    filter_size = get_kernel_size(factor)

    weights = np.zeros((filter_size,
                        filter_size,
                        number_of_classes,
                        number_of_classes), dtype=np.float32)

    upsample_kernel = upsample_filt(filter_size)

    for i in range(number_of_classes):
        weights[:, :, i, i] = upsample_kernel

    return weights


def deconv(inputs, upsample_factor):
    input_shape = tf.shape(inputs)

    # Calculate the output size of the upsampled tensor
    upsampled_shape = tf.stack([input_shape[0],
                                input_shape[1] * upsample_factor,
                                input_shape[2] * upsample_factor,
                                1])

    upsample_filter_np = bilinear_upsample_weights(upsample_factor, 1)
    upsample_filter_tensor = tf.constant(upsample_filter_np)

    # Perform the upsampling with a fixed (non-trainable) bilinear kernel
    upsampled_inputs = tf.nn.conv2d_transpose(inputs, upsample_filter_tensor,
                                              output_shape=upsampled_shape,
                                              strides=[1, upsample_factor, upsample_factor, 1])

    return upsampled_inputs
```
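A hypothetical usage of `deconv()` (the tensor name, shape, and factor are assumptions):

```python
# Upsample a single-channel side output by a factor of 4.
side_output = tf.placeholder(tf.float32, [None, None, None, 1])
upsampled = deconv(side_output, upsample_factor=4)  # 4x larger spatially
```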
yun-liu commented 6 years ago

As far as I know, tf.nn.conv2d_transpose() is for deconvolution, not for the dilation operation. You may have misunderstood me. Maybe you can remove the dilation operation, but please note the upsampling scale.
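A minimal sketch of the difference (TF 1.x; shapes are hypothetical): a dilated convolution keeps the spatial size but enlarges the receptive field, while a transposed convolution enlarges the feature map itself.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 32, 32, 64])
w = tf.get_variable('w', [3, 3, 64, 64])

# Dilated ("hole"/atrous) convolution: spatial size unchanged with SAME padding.
dilated = tf.nn.atrous_conv2d(x, w, rate=2, padding='SAME')  # -> [1, 32, 32, 64]

# Transposed convolution ("deconvolution"): upsamples the feature map.
w_up = tf.get_variable('w_up', [4, 4, 64, 64])
deconved = tf.nn.conv2d_transpose(x, w_up,
                                  output_shape=[1, 64, 64, 64],
                                  strides=[1, 2, 2, 1])      # -> [1, 64, 64, 64]
```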

CangHaiQingYue commented 6 years ago

Thanks. I found my biggest mistake was setting the VGG conv layers' lr = 0 in my code. After fixing that, the results are better than before. The next step may be to set different learning rates for different layers (see the sketch below); after studying that, I will report my results. I'm also wondering whether we can consider the dilation as a resize operation with a bilinear kernel. If so, then setting the tf.nn.conv2d_transpose() weights as non-trainable (trainable=False) may be the right way. I read your code and found that at the deconv step you set lr = 0, which I guess is the same as trainable=False in TensorFlow.
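One way to emulate Caffe's per-layer lr_mult in TF 1.x is to scale the gradients per variable before applying them. A sketch, where the scope names, the dummy loss, and the multiplier values are assumptions; the lr = 0 on the deconv weights follows the discussion above:

```python
import tensorflow as tf

# Dummy graph just to make the sketch self-contained; in practice
# `loss` comes from the network.
x = tf.placeholder(tf.float32, [None, 8])
with tf.variable_scope('conv5'):
    w5 = tf.get_variable('w', [8, 8])
with tf.variable_scope('deconv'):
    wd = tf.get_variable('w', [8, 8])
loss = tf.reduce_mean(tf.matmul(tf.matmul(x, w5), wd))

base_lr = 1e-6
optimizer = tf.train.MomentumOptimizer(base_lr, momentum=0.9)
grads_and_vars = optimizer.compute_gradients(loss)

scaled = []
for grad, var in grads_and_vars:
    if grad is None:
        continue
    if 'conv5' in var.op.name:       # hypothetical scope name
        grad = grad * 100.0          # multiplier value is an assumption
    elif 'deconv' in var.op.name:    # hypothetical scope name
        grad = grad * 0.0            # lr = 0: keep the bilinear kernel fixed
    scaled.append((grad, var))

train_op = optimizer.apply_gradients(scaled)
```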

yun-liu commented 6 years ago

But dilation is different from resizing.

CangHaiQingYue commented 6 years ago

OK..... I will learn the code carefully.

CangHaiQingYue commented 6 years ago

Hi, I've read the papers 'HED' and 'Fully Convolutional Networks for Semantic Segmentation'. I think the dilation is a simple bilinear interpolation with kernel size f. In FCN, it is called backwards convolution (sometimes called deconvolution). Whether the weights are learned or not is not important; HED showed that learned weights provide no noticeable improvement. Am I right?

yun-liu commented 6 years ago

I don't know where you got this information, but dilation has nothing to do with deconvolution or bilinear interpolation at all. Dilation is just Caffe's implementation of the well-known hole (à trous) algorithm. Please refer to some deep learning tutorials for more help.

CangHaiQingYue commented 6 years ago

Sorry to waste your time... I misunderstood your meaning: you mean the atrous algorithm. I used tf.nn.atrous_conv2d(bottom, filt, rate=2, padding='SAME') to do the dilation operation. Thanks for your help. It's very kind of you!
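For completeness, a sketch of how one dilated conv5 layer might look in TF 1.x. The function and variable names and the channel count are assumptions; the 3x3 kernel with rate=2 follows the call quoted above, not necessarily RCF's exact configuration:

```python
import tensorflow as tf

def conv5_dilated(bottom, name, out_channels=512):
    """One 3x3 convolution with dilation rate 2 (hole algorithm)."""
    in_channels = bottom.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        filt = tf.get_variable('weights', [3, 3, in_channels, out_channels])
        bias = tf.get_variable('biases', [out_channels],
                               initializer=tf.zeros_initializer())
        # SAME padding keeps the spatial size; the dilation only
        # enlarges the receptive field.
        conv = tf.nn.atrous_conv2d(bottom, filt, rate=2, padding='SAME')
        return tf.nn.relu(tf.nn.bias_add(conv, bias))
```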