Closed zhang-lingyun closed 3 years ago
Hi,
We did this because the VGG network we used was converted from a Caffe model. Its expected input range is [0, 255], so the feature loss comes out large and needs a small weight. If you use the pretrained VGG from PyTorch instead, this 0.001 could be something like 1.0 or 0.1.
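To make the scale argument concrete, here is a small stdlib-only sketch (the toy feature vectors and the plain `mse` helper are illustrative, not the repository's actual loss code). It shows why features computed from [0, 255]-range inputs produce a squared-error loss roughly 255² times larger than the same features at [0, 1] scale, which is what the 0.001 weight compensates for.

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Toy "features": the same pattern at two input scales.
# A Caffe-ported VGG sees inputs in [0, 255]; a typical PyTorch VGG
# (after normalization) sees values roughly in [0, 1].
feats_01 = [0.2, 0.5, 0.9]
gt_01    = [0.1, 0.6, 0.8]
feats_255 = [v * 255 for v in feats_01]
gt_255    = [v * 255 for v in gt_01]

loss_01  = mse(feats_01, gt_01)
loss_255 = mse(feats_255, gt_255)

# Squared error grows with the square of the input range, so the
# [0, 255]-scale loss is 255**2 = 65025 times larger.
ratio = loss_255 / loss_01

# A small weight such as 0.001 brings the Caffe-scale loss back to a
# magnitude comparable to the other loss terms; with a [0, 1]-range
# VGG, a weight near 1.0 or 0.1 plays the same role.
weighted = loss_255 * 0.001
```

So 0.001 is not a magic constant; it is a per-backbone balancing factor, and its right value depends on the input range the feature extractor expects.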
I see, thanks.
Why did you do this in your VGG loss?
return self.loss(x_features, gt_features, reduction='mean') * 0.001
The 0.001 shrinks the loss a lot. What is the purpose behind this? Thanks.