This is the structure of encoder and decoder of modified version of UNet (our CoarseNet):
But as we know from the original U-Net paper, each encoder layer is concatenated with the corresponding decoder layer, so the number of layers should be the same.
So I do not know how to apply these skip-connections.
One idea is to reuse encoder layers: concatenate an encoder layer with every decoder layer of the same spatial size, even if that means using it more than once.
Another idea is to separate the last layers of the decoder and use them as final layers without any skip-connections.
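The second idea can be sketched with a toy example. This is only an illustration of the wiring, not the actual CoarseNet: the layer counts, channel counts, and spatial sizes below are all hypothetical, and plain array operations stand in for real conv/pool/upsample layers. Skips are concatenated along the channel axis only for the decoder stages that have a matching encoder stage; the extra final layers run without skips.

```python
import numpy as np

def down(x):
    # stand-in for an encoder downsampling layer (2x2 pooling)
    return x[:, :, ::2, ::2]

def up(x):
    # stand-in for a decoder upsampling layer (nearest-neighbour x2)
    return x.repeat(2, axis=2).repeat(2, axis=3)

# toy input: batch 1, 8 channels, 32x32 (all sizes hypothetical)
x = np.zeros((1, 8, 32, 32))

# encoder: 3 downsampling stages, remember each feature map for skips
skips = []
for _ in range(3):
    skips.append(x)
    x = down(x)

# decoder part 1: stages that DO have a matching encoder stage,
# so each one concatenates the skip along the channel axis
for s in reversed(skips):
    x = up(x)
    x = np.concatenate([x, s], axis=1)  # skip connection

# decoder part 2: extra final layers with no encoder counterpart,
# so they run at full resolution without any skip (idea 2)
for _ in range(2):
    x = x  # stand-in for a plain conv layer

print(x.shape)  # channels grow by 8 at each skip: 8 -> 16 -> 24 -> 32
```

With this wiring the channel count of the decoder input at each skipped stage is the sum of the upsampled features and the encoder features, which is exactly what the following convolution would need to account for in its input channels.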