Closed: cepera-ang closed this issue 7 years ago
Hi, yes, you're totally right: we concatenate the input for the last block because we want to take all the information into the output. However, not concatenating leads to very similar results, and it's easy to convince yourself why (e.g. imagine that the last convolution just copies the previous slice of feature maps, which already take all the previous blocks into account). Thank you for your remark.
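To make the comparison concrete, the non-concatenating variant could look roughly like this (a sketch only, under the assumption that the classifier then sees just the last dense block's outputs; `block_to_upsample`, `SoftmaxLayer` and `n_classes` are the names used in the code discussed in the report below):

```python
# Hypothetical non-concatenating variant (NOT the repository code): feed only
# the concatenated outputs of the last dense block to the classifier, instead
# of `stack`, which also contains the block's input.
from lasagne.layers import ConcatLayer

last_block_output = ConcatLayer(block_to_upsample)         # block outputs only
output_layer = SoftmaxLayer(last_block_output, n_classes)  # vs. SoftmaxLayer(stack, n_classes)
```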
In the paper you explicitly mention that on the upsampling path we shouldn't concatenate a dense block's input with its output. However, this is exactly what happens in the last dense block because of the way the code is written:
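Here is roughly what the relevant part of the code looks like (paraphrasing rather than copying; `BN_ReLU_Conv`, `TransitionUp` and `SoftmaxLayer` are the helpers defined in the repository, and the exact arguments shown here are only indicative):

```python
# Upsampling path (paraphrased sketch; growth_rate, n_pool, n_layers_per_block,
# dropout_p, n_classes, skip_connection_list, block_to_upsample and the helper
# layers are defined earlier in the repository's model-building function).
from lasagne.layers import ConcatLayer

for i in range(n_pool):
    # Transition Up: upsample the previous dense block output and concatenate
    # it with the skip connection. Note that `stack` from the previous
    # iteration is not used here, so its final concatenation is discarded...
    n_filters_keep = growth_rate * n_layers_per_block[n_pool + i]
    stack = TransitionUp(skip_connection_list[i], block_to_upsample, n_filters_keep)

    # Dense block: each new layer is collected in block_to_upsample and also
    # concatenated into `stack`, so `stack` ends up holding the block's input
    # plus all of its outputs.
    block_to_upsample = []
    for j in range(n_layers_per_block[n_pool + i + 1]):
        l = BN_ReLU_Conv(stack, growth_rate, dropout_p=dropout_p)
        block_to_upsample.append(l)
        stack = ConcatLayer([stack, l])

# ...except after the last outer iteration, where `stack` (block input plus
# block output) is what gets fed to the classifier.
output_layer = SoftmaxLayer(stack, n_classes)
```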
Those loops run while there are skip connections remaining, so the last line `stack = ConcatLayer([stack, l])` normally has no effect: `stack` is discarded at the last iteration of the inner loop and later assigned a new value, the concatenation of `block_to_upsample` and the `skip_connection`. However, at the last iteration of the outer loop `stack` is preserved; it contains not only the output of the dense block but also its input, and it is used as the input to the Softmax. The number of feature maps that way is the same as mentioned in the paper: 256 (4*16 = 64 from the dense block and 192 from the latest TU block).

There is nothing wrong with it, I'm just confused whether this is intended behavior (and a clever programming trick) or an overlooked detail that happens to lead to fine results. Either way, it would be worth updating the paper (or the code) with a remark about how it is supposed to work. Thank you!
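For reference, the feature-map count quoted above works out as follows (16, 4 and 192 are simply the values mentioned in this report, assumed here rather than re-derived from the code):

```python
# Quick count of the feature maps entering the Softmax in the last block
# (the concrete values are the ones quoted above, assumed here).
growth_rate = 16            # feature maps added by each layer of the block
n_layers_last_block = 4     # layers in the last dense block
from_transition_up = 192    # feature maps coming out of the last TU block

from_dense_block = n_layers_last_block * growth_rate    # 4 * 16 = 64
softmax_input = from_transition_up + from_dense_block   # 192 + 64 = 256
print(softmax_input)
```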