lartpang / Machine-Deep-Learning

:wave: ML/DL study notes (fundamentals + papers)

ResNet code: matching the bottleneck layer output with the shortcut connection #1

Closed · lartpang closed this 5 years ago

lartpang commented 5 years ago

https://github.com/lartpang/ML_markdown/Net-Paper/ResNet总结(2015).md

# Imports as used by the original resnet_v1.py in tensorflow/models (research/slim):
import tensorflow as tf

from nets import resnet_utils

slim = tf.contrib.slim


@slim.add_arg_scope
def bottleneck(inputs,
               depth,
               depth_bottleneck,
               stride,
               rate=1,
               outputs_collections=None,
               scope=None,
               use_bounded_activations=False):

  with tf.variable_scope(scope, 'bottleneck_v1', [inputs]) as sc:
    depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
    # Create the shortcut connection.
    # If the output depth equals the input depth, the shortcut only needs to be
    # subsampled to the same spatial size; if the depths differ, a 1x1 convolution
    # (with the same stride) is used to match the number of channels.
    if depth == depth_in:
      shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
    else:
      shortcut = slim.conv2d(
          inputs,
          depth, [1, 1],
          stride=stride,
          activation_fn=tf.nn.relu6 if use_bounded_activations else None,
          scope='shortcut')

    # Residual branch: 1x1 conv to reduce the depth, 3x3 conv carrying the stride,
    # then 1x1 conv (no activation) to restore the output depth.
    residual = slim.conv2d(inputs, depth_bottleneck, [1, 1], stride=1,
                           scope='conv1')
    residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
                                        rate=rate, scope='conv2')
    residual = slim.conv2d(residual, depth, [1, 1], stride=1,
                           activation_fn=None, scope='conv3')

    if use_bounded_activations:
      # Use clip_by_value to simulate bandpass activation.
      residual = tf.clip_by_value(residual, -6.0, 6.0)
      output = tf.nn.relu6(shortcut + residual)
    else:
      output = tf.nn.relu(shortcut + residual)

    return slim.utils.collect_named_outputs(outputs_collections,
                                            sc.name,
                                            output)
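
For reference, a minimal way to run this block standalone (a sketch only: it assumes TF 1.x graph mode, the `nets` package from tensorflow/models on the import path, and an illustrative 1x56x56x256 input; none of these sizes come from the issue):

# Usage sketch for the `bottleneck` function above, with illustrative sizes.
inputs = tf.placeholder(tf.float32, [1, 56, 56, 256])
net = bottleneck(inputs, depth=512, depth_bottleneck=128, stride=2)
print(net.get_shape().as_list())  # [1, 28, 28, 512]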

Does `shortcut + residual` here require the two tensors to have the same height and width?

lartpang commented 5 years ago

Yes, otherwise they could not be added: the sum is elementwise, so the two tensors must match in height, width, and channel count. That is exactly why the shortcut branch applies the same `stride` (via `subsample`, or via the strided 1x1 conv that also projects the input to `depth` channels when `depth != depth_in`).
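
Concretely, the spatial sizes agree because both branches apply the same `stride` under 'SAME'-style padding (the shortcut via `subsample` or the strided 1x1 conv, the residual via `conv2d_same` in 'conv2'), so each reduces height/width to ceil(size / stride), and the final 1x1 conv puts both branches at `depth` channels. A sketch of that bookkeeping in plain Python, with illustrative numbers that are not taken from the issue:

import math

def same_pad_out(size, stride):
    # Output spatial size under 'SAME'-style padding; conv2d_same yields the same result.
    return math.ceil(size / stride)

h, depth_in = 56, 256                          # example input: 56x56x256
depth, depth_bottleneck, stride = 512, 128, 2  # example block parameters

# Shortcut branch: a single op carrying `stride`, projected to `depth` channels.
shortcut_shape = (same_pad_out(h, stride), same_pad_out(h, stride), depth)

# Residual branch: conv1 (1x1, stride 1) -> conv2 (3x3, `stride`) -> conv3 (1x1, stride 1).
h1 = same_pad_out(h, 1)            # 56, depth_bottleneck channels
h2 = same_pad_out(h1, stride)      # 28, depth_bottleneck channels
residual_shape = (h2, h2, depth)   # 28x28, restored to `depth` channels

assert shortcut_shape == residual_shape == (28, 28, 512)  # elementwise add is valid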