owensca closed this issue 5 years ago
If I set layers = 5, then from the code below it seems that the number of down-layers will be 5 and the number of up-layers will be 4. This would yield a total of 9 convolutional layers altogether. Is this correct?
From 'unet.py':
```python
# down layers
for layer in range(0, layers):
    features = 2**layer*features_root
    stddev = np.sqrt(2 / (filter_size**2 * features))
    if layer == 0:
        w1 = weight_variable([filter_size, filter_size, channels, features], stddev)
    else:
        w1 = weight_variable([filter_size, filter_size, features//2, features], stddev)

    w2 = weight_variable([filter_size, filter_size, features, features], stddev)
    b1 = bias_variable([features])
    b2 = bias_variable([features])

    conv1 = conv2d(in_node, w1, keep_prob)
    tmp_h_conv = tf.nn.relu(conv1 + b1)
    conv2 = conv2d(tmp_h_conv, w2, keep_prob)
    dw_h_convs[layer] = tf.nn.relu(conv2 + b2)

    weights.append((w1, w2))
    biases.append((b1, b2))
    convs.append((conv1, conv2))

    size -= 4
    if layer < layers-1:
        pools[layer] = max_pool(dw_h_convs[layer], pool_size)
        in_node = pools[layer]
        size /= 2

in_node = dw_h_convs[layers-1]

# up layers
for layer in range(layers-2, -1, -1):
    features = 2**(layer+1)*features_root
    stddev = np.sqrt(2 / (filter_size**2 * features))

    wd = weight_variable_devonc([pool_size, pool_size, features//2, features], stddev)
    bd = bias_variable([features//2])
    h_deconv = tf.nn.relu(deconv2d(in_node, wd, pool_size) + bd)
    h_deconv_concat = crop_and_concat(dw_h_convs[layer], h_deconv)
    deconv[layer] = h_deconv_concat

    w1 = weight_variable([filter_size, filter_size, features, features//2], stddev)
    w2 = weight_variable([filter_size, filter_size, features//2, features//2], stddev)
    b1 = bias_variable([features//2])
    b2 = bias_variable([features//2])

    conv1 = conv2d(h_deconv_concat, w1, keep_prob)
    h_conv = tf.nn.relu(conv1 + b1)
    conv2 = conv2d(h_conv, w2, keep_prob)
    in_node = tf.nn.relu(conv2 + b2)
    up_h_convs[layer] = in_node

    weights.append((w1, w2))
    biases.append((b1, b2))
    convs.append((conv1, conv2))

    size *= 2
    size -= 4
```
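As a side note, the `size` bookkeeping in the snippet can be checked in isolation. The sketch below (standalone; the function name and defaults are illustrative, not from `unet.py`) replays the same arithmetic: each pair of unpadded convolutions trims `2*(filter_size-1)` pixels (`size -= 4` for 3×3 filters), each max-pool halves the resolution, and each deconvolution doubles it.

```python
def output_size(in_size, layers, filter_size=3, pool_size=2):
    """Replay the size arithmetic from the loops above."""
    size = in_size
    # down path: two valid convs per level, pooling between levels
    for layer in range(layers):
        size -= 2 * (filter_size - 1)   # size -= 4 for 3x3 filters
        if layer < layers - 1:
            size /= pool_size           # max pooling
    # up path: deconv doubles, then two more valid convs
    for layer in range(layers - 2, -1, -1):
        size *= pool_size               # size *= 2
        size -= 2 * (filter_size - 1)   # size -= 4
    return size

print(output_size(572, layers=5))  # 388.0
```

With `layers = 5` a 572-pixel input yields a 388-pixel output, matching the tile sizes in Fig. 1 of the U-Net paper.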
Yes, this is correct and corresponds to Fig. 1 in Ronneberger et al.'s paper.
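You can also confirm the count directly from the loop bounds: the down loop is `range(0, layers)` and the up loop is `range(layers-2, -1, -1)`. A minimal sketch (the helper name is mine, not from `unet.py`):

```python
def count_levels(layers):
    """Count down/up levels from the loop bounds in unet.py."""
    down = len(range(0, layers))            # down path runs `layers` times
    up = len(range(layers - 2, -1, -1))     # up path runs `layers - 1` times
    return down, up, down + up

print(count_levels(5))  # (5, 4, 9)
```

So `layers = 5` gives 5 down levels plus 4 up levels, 9 in total; note each level actually contains two convolutions.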