I have noticed that an extra set of ResBlock + optional AttentionBlock layers is added in the decoder compared to the encoder in the UNet implementation. The paper mentions that the encoder and decoder are mirrors of each other. Is this a bug?
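To illustrate what I mean, here is a minimal sketch (hypothetical helper, not code from the repo) assuming the guided-diffusion-style layout where the encoder stacks `num_res_blocks` ResBlocks per resolution level while the decoder stacks `num_res_blocks + 1`:

```python
def count_res_blocks(num_res_blocks: int, num_levels: int) -> tuple[int, int]:
    """Count ResBlocks in the encoder and decoder halves (hypothetical sketch)."""
    encoder = num_res_blocks * num_levels        # input_blocks: N per level
    decoder = (num_res_blocks + 1) * num_levels  # output_blocks: N + 1 per level
    return encoder, decoder

enc, dec = count_res_blocks(num_res_blocks=2, num_levels=4)
print(enc, dec)  # 8 12 -- the decoder has one extra block per level
```

So with the usual settings the two halves are not exact mirrors, which is what prompted my question.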