HasnainRaz / FC-DenseNet-TensorFlow

Fully Convolutional DenseNet (A.K.A 100 layer tiramisu) for semantic segmentation of images implemented in TensorFlow.

about the upsampling #1

Closed Fengmoon93 closed 6 years ago

Fengmoon93 commented 6 years ago

Hey, I am building the FC-DenseNet from the paper. In the paper, the upsampling path only uses the output of the last dense block, but in your code:

print("Building upsample path...")
for i, block_nb in enumerate(range(self.nb_blocks - 1, 0, -1)):
    x = self.transition_up(x, x.get_shape()[-1], 'trans_up_' + str(block_nb))
    x = tf.concat([x, concats[len(concats) - i - 1]], axis=3, name='up_concat_' + str(block_nb))
    print(x.get_shape())
    x = self.dense_block(x, training, block_nb, 'up_dense_block_' + str(block_nb))

Here, x is the last tensor in the skip list, i.e. concats[4]. Maybe you should write x = dense before the for loop, so that x is the output of the last dense block. Thanks.
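For illustration, the suggested change applied to the upsampling loop quoted above would look roughly like this (a sketch only; dense is assumed to be the variable holding the bottleneck dense block's output in the downsampling code):

x = dense  # proposed: start the upsampling path from the bottleneck dense block output

for i, block_nb in enumerate(range(self.nb_blocks - 1, 0, -1)):
    x = self.transition_up(x, x.get_shape()[-1], 'trans_up_' + str(block_nb))
    x = tf.concat([x, concats[len(concats) - i - 1]], axis=3, name='up_concat_' + str(block_nb))
    x = self.dense_block(x, training, block_nb, 'up_dense_block_' + str(block_nb))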

HasnainRaz commented 6 years ago

Can you explain more? The upsampling follows the logic: bottleneck -> upsample -> concat with same-scale downward-path features -> dense block -> upsample -> concat with same-scale downward-path features -> dense block -> ...and so on.
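In pseudocode, that flow would be roughly the following (a sketch only; names like skip_connections, transition_up and dense_block are placeholders here, not the exact identifiers in the repo):

x = bottleneck_output                      # output of the bottleneck dense block
for skip in reversed(skip_connections):    # same-scale features saved on the downward path
    x = transition_up(x)                   # transposed convolution, doubles the spatial size
    x = tf.concat([x, skip], axis=-1)      # concat with the matching downward-path features
    x = dense_block(x)                     # dense block on the merged features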

Fengmoon93 commented 6 years ago

In the paper, the transposed convolution is applied only to the feature maps obtained by the last dense block. In your code, however, after the downsampling loop x is the TD output, while dense is the output of the last dense block (the bottleneck of the 5 + 1 + 5 DBs). I think the input of the upsampling path should be that final dense output, so I made a little change:

for index in range(6):
    print("generating the " + str(index) + " DB")
    print("the input feature is ", x.get_shape())
    dense = self.dense_block(x, training, self.n_layers_per_block[index], 'down_dense_block_' + str(index))
    print('the output of dense_block' + str(index) + ": the feature size is ", dense.get_shape())
    if index != 5:
        print("the concatenate " + str(index))
        x = tf.concat([x, dense], axis=-1, name='down_concat_' + str(index))
        # print('down_concat_' + str(index) + ": the feature size is ", x.get_shape())
        concats.append(x)
        print("after the concatenate,the input is ", x.get_shape())
        print("generating the " + str(index) + " TD block")
        x = self.transition_down(x, training, x.get_shape()[-1], 'trans_down_' + str(index))
        print('after the down_down_ ' + str(index) + " : the feature size is ", x.get_shape())
concats = concats[::-1]
# print("the concats features are : ", concats)
print("Building upsample path...")
part = 6
print("the dense feature from the last DB ", dense)
x = dense
for pointer in range(5):
    print("generating the " + str(pointer) + " TU")
    print("the input of the TD is ", x)
    x = self.transition_up(x, x.get_shape()[-1], 'trans_up_' + str(pointer))
    print('after the trans_up_' + str(pointer) + ": the feature size is ", x.get_shape())
    x = tf.concat([x, concats[pointer]], axis=-1, name='up_concat_' + str(pointer))
    print('up_concatenate' + str(pointer) + ": the feature size is ", x.get_shape())
    x = self.dense_block(x, training, self.n_layers_per_block[part + pointer], 'up_dense_block_' + str(pointer))
    print('up_dense_block_' + str(pointer) + ": the feature size is ", x.get_shape())
The build output is:

first convolution...
the output tensor is :  (4, 672, 672, 48)
done...
Building downsample path...
generating the 0 DB
the input feature is  (4, 672, 672, 48)
the output of dense_block0: the feature size is  (4, 672, 672, 48)
the concatenate 0
after the concatenate,the input is  (4, 672, 672, 96)
generating the 0 TD block
after the down_down_ 0 : the feature size is  (4, 336, 336, 96)
generating the 1 DB
the input feature is  (4, 336, 336, 96)
the output of dense_block1: the feature size is  (4, 336, 336, 48)
the concatenate 1
after the concatenate,the input is  (4, 336, 336, 144)
generating the 1 TD block
after the down_down_ 1 : the feature size is  (4, 168, 168, 144)
generating the 2 DB
the input feature is  (4, 168, 168, 144)
the output of dense_block2: the feature size is  (4, 168, 168, 48)
the concatenate 2
after the concatenate,the input is  (4, 168, 168, 192)
generating the 2 TD block
after the down_down_ 2 : the feature size is  (4, 84, 84, 192)
generating the 3 DB
the input feature is  (4, 84, 84, 192)
the output of dense_block3: the feature size is  (4, 84, 84, 48)
the concatenate 3
after the concatenate,the input is  (4, 84, 84, 240)
generating the 3 TD block
after the down_down_ 3 : the feature size is  (4, 42, 42, 240)
generating the 4 DB
the input feature is  (4, 42, 42, 240)
the output of dense_block4: the feature size is  (4, 42, 42, 48)
the concatenate 4
after the concatenate,the input is  (4, 42, 42, 288)
generating the 4 TD block
after the down_down_ 4 : the feature size is  (4, 21, 21, 288)
generating the 5 DB
the input feature is  (4, 21, 21, 288)
the output of dense_block5: the feature size is  (4, 21, 21, 48)
Building upsample path...
the dense feature from the last DB  Tensor("down_dense_block_5/concat_4:0", shape=(4, 21, 21, 48), dtype=float32)
generating the 0 TU
the input of the TD is  Tensor("down_dense_block_5/concat_4:0", shape=(4, 21, 21, 48), dtype=float32)
after the trans_up_0: the feature size is  (4, 42, 42, 48)
up_concatenate0: the feature size is  (4, 42, 42, 336)
up_dense_block_0: the feature size is  (4, 42, 42, 48)
generating the 1 TU
the input of the TD is  Tensor("up_dense_block_0/concat_4:0", shape=(4, 42, 42, 48), dtype=float32)
after the trans_up_1: the feature size is  (4, 84, 84, 48)
up_concatenate1: the feature size is  (4, 84, 84, 288)
up_dense_block_1: the feature size is  (4, 84, 84, 48)
generating the 2 TU
the input of the TD is  Tensor("up_dense_block_1/concat_4:0", shape=(4, 84, 84, 48), dtype=float32)
after the trans_up_2: the feature size is  (4, 168, 168, 48)
up_concatenate2: the feature size is  (4, 168, 168, 240)
up_dense_block_2: the feature size is  (4, 168, 168, 48)
generating the 3 TU
the input of the TD is  Tensor("up_dense_block_2/concat_4:0", shape=(4, 168, 168, 48), dtype=float32)
after the trans_up_3: the feature size is  (4, 336, 336, 48)
up_concatenate3: the feature size is  (4, 336, 336, 192)
up_dense_block_3: the feature size is  (4, 336, 336, 48)
generating the 4 TU
the input of the TD is  Tensor("up_dense_block_3/concat_4:0", shape=(4, 336, 336, 48), dtype=float32)
after the trans_up_4: the feature size is  (4, 672, 672, 48)
up_concatenate4: the feature size is  (4, 672, 672, 144)
up_dense_block_4: the feature size is  (4, 672, 672, 48)
the logits : (4, 672, 672, 19)

Maybe I am wrong, thanks for answering me ~ But one more question: does the HeUniform initializer really work?
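For reference, a minimal sketch of He-uniform initialization in TF 1.x (the standard definition, not necessarily how this repo wires it in): weights are drawn from U(-limit, limit) with limit = sqrt(6 / fan_in), which keeps the activation variance roughly constant for ReLU layers.

import tensorflow as tf

# He-uniform: U(-limit, limit) with limit = sqrt(6 / fan_in)
he_uniform = tf.variance_scaling_initializer(scale=2.0, mode='fan_in',
                                             distribution='uniform')
# Equivalently: he_uniform = tf.keras.initializers.he_uniform()

# Hypothetical usage on a 3x3 conv with 48 filters, like the first convolution above:
x = tf.layers.conv2d(x, filters=48, kernel_size=3, padding='same',
                     kernel_initializer=he_uniform, name='first_conv')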

HasnainRaz commented 6 years ago

Ah, good catch, I pushed a fix, thanks for reporting this.