christopher-beckham opened this issue 6 years ago
Can confirm with a completely different, self-made TensorFlow implementation that the estimated Wasserstein distances get very large. I don't really know what's causing it either. With WGAN-GP the values are normally in the range of 0 to 10 or 20.
Doing it with GP would be counterintuitive, though, since spectral norm is meant to be a (computationally cheaper) replacement for it. But thanks for reporting what you're seeing on your side.
Yes, I did not use GP and spectral norm at the same time. Rather, I have used WGAN-GP a lot, and my experience there was that the estimated Wasserstein distance was usually between 0 and 10 or 20. Then I removed the GP term and replaced it with spectral normalization, keeping everything else the same (including the Wasserstein loss), and now the estimated Wasserstein distances are all over the place, in the millions, etc.
Are you sure you're dividing the weights of the convolution layers by the spectral norm correctly? If it's implemented correctly, the estimate shouldn't reach such high numbers.
I had a similar issue at the start where I was multiplying by the spectral norm instead of dividing by it, so that could be why it's reaching the millions; see the sketch of the intended operation below.
Hi,
I'm using the recently released PyTorch 0.4 (not sure if that's causing the funky numbers I'm getting), but I'm getting the following with

python main.py --model resnet --loss wasserstein

Is this meant to happen?
Thanks!