[Closed] haoliyoupai09 closed this issue 6 years ago
Hi. When training the parent model, we initialize the weights of the deconvolutional layer to a bilinear upsampling kernel and freeze them (see link below). For the online training scripts, we simply copy those weights and use them unaltered. It is possible to train these weights, but in our experiments the results were practically the same.
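For context, the fixed bilinear kernel described above is usually built the standard FCN way. A minimal sketch (function names and the weight layout are illustrative, not taken from this repo):

```python
import numpy as np

def bilinear_kernel(size):
    """Return a (size, size) bilinear interpolation kernel, the
    standard fixed initialization for upsampling deconv layers."""
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

def fill_deconv_weights(num_channels, kernel_size):
    """Weights of shape (C_in, C_out, kH, kW): each channel upsamples
    itself with the bilinear kernel; cross-channel weights stay zero."""
    w = np.zeros((num_channels, num_channels, kernel_size, kernel_size))
    w[range(num_channels), range(num_channels), :, :] = bilinear_kernel(kernel_size)
    return w
```

With `lr_mult: 0` on the layer, these weights are never touched by the solver, so the deconvolution acts as plain bilinear upsampling throughout training.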
Oh, I see. Thanks a lot.
The weight_filler in the deconvolutional layer is missing, and lr_mult is set to 0. Does this mean that the weights of the deconvolutional layer are initialized with the default filler and never updated after initialization? If so, why not update them? Any help is appreciated, and I look forward to your reply.
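For reference, a deconvolution layer frozen in the way discussed here typically looks like the following in a Caffe prototxt. This is a hedged sketch, not the repo's actual definition; the layer/blob names, `num_output`, `kernel_size`, and `stride` are illustrative (Caffe also ships a `bilinear` filler that can set these weights explicitly instead of copying them):

```protobuf
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score"
  top: "upscore"
  # lr_mult: 0 freezes the weights, so whatever they were
  # initialized to (here, bilinear upsampling) is never updated.
  param { lr_mult: 0 }
  convolution_param {
    num_output: 21       # illustrative channel count
    kernel_size: 4       # illustrative; 2x upsampling pair
    stride: 2
    bias_term: false
    weight_filler { type: "bilinear" }
  }
}
```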