Closed vinsis closed 5 years ago
Hi, a quick question: since decoder #5 contains all the layers that decoder #4 has, is it possible to share the weights of the common layers between them? Right now the weights are entirely separate. I believe sharing the weights would save computation and training time. Thanks.

No. Each decoder, from #1 to #5, was trained separately, together with its respective encoder, for general-purpose image reconstruction. Therefore the decoders' weights differ even in the layers they have in common.

Thanks.
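For reference, here is one way the sharing asked about could be expressed, assuming a PyTorch implementation: reuse the same `nn.Module` instances for the common layers, so both decoders point at one set of parameters. The layer shapes below are illustrative, not the repository's actual architecture, and as the answer notes, this only makes sense if the decoders are trained jointly; separately trained decoders cannot share weights after the fact.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: the layers common to decoder #4 and decoder #5
# are built once and reused, so they hold a single set of parameters.
common = nn.Sequential(
    nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),
    nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

decoder4 = common  # decoder #4 is exactly the common part

decoder5 = nn.Sequential(          # decoder #5 prepends an extra block ...
    nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),
    common,                        # ... then reuses the SAME modules
)

# The shared parameters are identical objects, so a gradient step through
# either decoder updates both.
print(decoder4[0].weight is decoder5[3][0].weight)  # True
```

With this layout, an optimizer built over `decoder5.parameters()` already covers all of `decoder4`'s parameters, which is where the computation and memory savings would come from.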