Closed zhangmozhe closed 6 years ago
I'm not sure, since I didn't test the PyTorch implementation. It might be that in the style loss we concatenate the features from all layers into one large tensor.
On Fri, Aug 24, 2018, 12:07, zhangmozhe < notifications@github.com> wrote:
Hi, I tried using your PyTorch version of the contextual loss for style transfer, with both the content and style terms using the contextual loss, but the results are not meaningful. The PyTorch code uses L2 distance, which differs from the paper (cosine distance). Do I need to normalize the features before feeding them into the contextual loss? Is there any suggestion for why my style transfer fails? Thanks a lot!
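For reference, here is a minimal sketch of a contextual loss using cosine distance with channel-wise feature normalization, following the formulation in the paper. The bandwidth `h`, the `eps` value, and mean-centering by the target features are assumptions on my part, not taken from the repository's code:

```python
import torch

def contextual_loss(x, y, h=0.5, eps=1e-5):
    """Sketch of a cosine-distance contextual loss (hypothetical helper).

    x, y: feature tensors of shape (N, C, H, W), e.g. VGG activations.
    """
    n, c, _, _ = x.shape

    # Center both feature sets by the target mean (assumption from the paper)
    y_mu = y.mean(dim=(0, 2, 3), keepdim=True)
    x_c, y_c = x - y_mu, y - y_mu

    # L2-normalize the channel vectors so dot products give cosine similarity
    x_n = x_c / (x_c.norm(dim=1, keepdim=True) + eps)
    y_n = y_c / (y_c.norm(dim=1, keepdim=True) + eps)

    # Pairwise cosine distance between all spatial positions: (N, HW, HW)
    x_flat = x_n.reshape(n, c, -1)
    y_flat = y_n.reshape(n, c, -1)
    d = 1.0 - torch.bmm(x_flat.transpose(1, 2), y_flat)

    # Relative distances, then row-normalized affinities (softmax-like)
    d_min = d.min(dim=2, keepdim=True).values
    d_tilde = d / (d_min + eps)
    w = torch.exp((1.0 - d_tilde) / h)
    cx_ij = w / w.sum(dim=2, keepdim=True)

    # CX(X, Y): average over queries of the best match, then -log
    cx = cx_ij.max(dim=2).values.mean(dim=1)
    return -torch.log(cx + eps).mean()
```

If the normalization step is what the repository's L2-distance version is missing, swapping in something like the above (or normalizing the features before the existing loss) may be worth trying.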