I watched a presentation from Artomatix where they made some arguments for using a covariance loss instead of a Gram loss. You can flip between the two (I think this is correct) by doing:
```python
import torch
import torch.nn as nn

class GramMatrix(nn.Module):
    def forward(self, input):
        B, C, H, W = input.size()
        # Flatten each channel's feature map (assumes batch size 1)
        x_flat = input.view(C, H * W)
        # Add this line for the covariance loss: center each channel
        x_flat = x_flat - x_flat.mean(1, keepdim=True)
        return torch.mm(x_flat, x_flat.t())
```
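A quick sanity check of the relationship (my own sketch, not from the Artomatix talk): the covariance matrix is just the Gram matrix of mean-centered feature maps, so centering shifts the Gram matrix by an outer product of the per-channel means.

```python
import torch

def gram(feats, center=False):
    # feats: (C, H*W) flattened feature maps for a single image
    if center:
        feats = feats - feats.mean(dim=1, keepdim=True)
    return feats @ feats.t()

x = torch.randn(8, 16 * 16)   # pretend C=8, H=W=16
g = gram(x)                   # Gram matrix, shape (8, 8)
c = gram(x, center=True)      # covariance-style matrix

# (x - mu)(x - mu)^T expands to g - n * mu @ mu^T,
# where mu is the per-channel mean and n = H*W
mu = x.mean(dim=1, keepdim=True)
n = x.size(1)
assert torch.allclose(c, g - n * (mu @ mu.t()), atol=1e-4)
```

So the two losses only differ in whether the channel means contribute to the matched statistics.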
I didn't experiment with it much, but using the default content/style images at 1024, you get these:

Gram loss:
![gram](https://user-images.githubusercontent.com/36798976/51767404-3dcf4100-20ab-11e9-89c9-d9c186098c18.png)

Covariance loss:
![cov](https://user-images.githubusercontent.com/36798976/51767408-4162c800-20ab-11e9-9e23-29ed99a1a75d.png)
I wouldn't say it's better, but it's interesting that it adds more texture to the sky. It might have some utility for more heavily textured styles?
Thoughts?
p.s. Nice implementation! Happy there's a true-to-jcjohnson, CUDA 10, PyTorch implementation.