-
In the code, a loss objective named Wasserstein is defined as follows.
```
from keras import backend as K

def wasserstein(y_true, y_pred):
    return K.mean(y_true * y_pred)
```
After reading the original paper, we know WG…
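For context, here is a minimal sketch of how such a loss is typically wired into a Keras critic, assuming real samples are labelled -1 and generated samples +1 so that minimizing the loss maximizes D(real) - D(fake); the toy model, labels, and shapes below are illustrative, not taken from the repository.
```
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def wasserstein(y_true, y_pred):
    return K.mean(y_true * y_pred)

# A toy critic with a linear output (no sigmoid), as WGAN requires.
critic = Sequential([Dense(64, activation='relu', input_dim=10), Dense(1)])
critic.compile(optimizer='rmsprop', loss=wasserstein)

real_batch = np.random.randn(32, 10)   # stand-in for real data
fake_batch = np.random.randn(32, 10)   # stand-in for generator output
critic.train_on_batch(real_batch, -np.ones((32, 1)))  # real labelled -1
critic.train_on_batch(fake_batch, np.ones((32, 1)))   # fake labelled +1
```
With this label convention, the loss on a mixed batch is -mean(D(real)) + mean(D(fake)), so minimizing it pushes the critic scores for real and fake samples apart.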
-
Hi @bamos,
I'm just getting started with `qpth` and am a beginner in mathematical optimisation. To be honest, I'm not sure I know what I'm doing.
Could I ask your advice, please? Is it pos…
-
@bobchennan Hi Chennan, glad to see your implementation of WGANs. Thank you for your contribution. While reading your code, I see two main changes that you have made:
(1) Line 55: K.mean(target*out…
-
WGAN seems really slow. I tried your code with 10 million iterations, but the output images still look very bad.
Do you have any idea?
Thanks!
-
D's output for real images is always negative, while for generated images the output is always positive.
Why does this happen?
Also, I find that after training for a long time, D's loss tends to 0, and G'…
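For what it's worth, with the K.mean(y_true * y_pred) loss only the gap between the critic's scores on real and fake batches matters, so the raw scores can take either sign. A toy illustration with hypothetical numbers (not from this thread):
```
import numpy as np

# Hypothetical critic scores; the absolute sign is not meaningful on its own.
d_real = np.array([-0.8, -0.5, -0.9])   # critic scores on real images
d_fake = np.array([ 0.6,  0.4,  0.7])   # critic scores on generated images

# The critic is trained to widen this gap; a gap near 0 means it can no
# longer tell the two batches apart (which may mean the generator improved,
# or that the critic is undertrained).
wasserstein_estimate = d_real.mean() - d_fake.mean()
print(wasserstein_estimate)
```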
-
I used the same `calc_GradientPenalty` method as yours with the latest master branch of PyTorch ('0.1.12+625850c'), but it gets stuck at `penalty.backward()` with an error:
> "RuntimeError: there are no g…