zhanghang1989 / PyTorch-Multi-Style-Transfer

Neural Style and MSG-Net
http://hangzh.com/PyTorch-Style-Transfer/
MIT License

VGG network output #28

Open AlexTS1980 opened 5 years ago

AlexTS1980 commented 5 years ago

The variable y is the output of the style model, and x_c is a copy of the content image. Both are run through the VGG network, which outputs four feature arrays with 64, 128, 256, and 512 maps. To compute the content loss, only the second one is used (features_y[1] and features_xc[1].data).

Why was this specific layer selected for the content loss computation, rather than the one(s) with more maps?