Closed Youskrpig closed 3 years ago
Another question, still about this line:
model = make_arch(config_type, cfg, use_bias, True).cuda()
Since batch norm is set to True, bias=False. I see in the function make_layers that the conv weights are initialized with torch.nn.init.xavier_uniform_(conv2d.weight),
but why is there no BatchNorm initialization?
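For context, here is a minimal sketch of what a VGG-style make_layers with Xavier conv initialization might look like. The signature, cfg format, and defaults below are illustrative assumptions, not the repository's actual code; the point is that the BatchNorm layers are simply left with PyTorch's own default initialization.

```python
import torch.nn as nn

def make_layers(cfg, use_bias=False, batch_norm=True):
    """Sketch of a VGG-style layer builder (illustrative, not the repo's code).

    Conv weights get Xavier initialization; BatchNorm layers keep
    PyTorch's defaults (weight=1, bias=0), so no explicit BN init
    is needed.
    """
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            # 'M' marks a max-pooling stage in the config list
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3,
                               padding=1, bias=use_bias)
            nn.init.xavier_uniform_(conv2d.weight)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    return nn.Sequential(*layers)
```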
Hi, thanks for your interest. We explore different setups in our ablation study, including a randomly initialized VGG (pretrain=False). The results indicate that a source network with pre-trained weights leads to better results, so we recommend Vgg(pretrain=True) for source initialization. Please refer to the paper for more information.
Ummm, I mean the cloner model, not the source model.
The cloner model is a modified version of the VGG network, and it must be initialized with random weights. Batch normalization layers are initialized with PyTorch's defaults, as is done in the code. For more information, please refer to this link: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html
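To illustrate the point about defaults: a freshly constructed nn.BatchNorm2d already comes initialized by PyTorch itself (affine weight gamma to 1, bias beta to 0, running_mean to 0, running_var to 1), which is why no explicit BN initialization appears in make_layers. A quick check:

```python
import torch
import torch.nn as nn

# PyTorch initializes BatchNorm2d parameters on construction:
# gamma (weight) = 1, beta (bias) = 0,
# running_mean = 0, running_var = 1.
bn = nn.BatchNorm2d(64)
print(bn.weight.data.unique())   # tensor([1.])
print(bn.bias.data.unique())     # tensor([0.])
print(bn.running_mean.unique())  # tensor([0.])
print(bn.running_var.unique())   # tensor([1.])
```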
Hi, thanks for your great work! Here are two lines of code in network.py:
vgg = Vgg16(pretrain).cuda()
model = make_arch(config_type, cfg, use_bias, True).cuda()
I want to ask why you didn't set model = Vgg16(pretrain=False).cuda(); is there any difference?