boathit / deepst


Problem with Generative NN2 #12

Closed: jayantjain100 closed this issue 4 years ago

jayantjain100 commented 4 years ago

Hi,

I was trying to use the VAE technique from your paper to learn proxies for (lat, lon) coordinates, but I was running into some issues. To debug, I looked specifically at the Generative NN2 module and fed it the correct Pi vectors to see whether it could estimate the corresponding Gaussian parameters.

The Generative NN2 module I'm using is:

import torch
import torch.nn as nn

class ProbX(nn.Module):
    def __init__(self, k):
        super(ProbX, self).__init__()
        # linear layers mapping a k-dimensional Pi vector to the mean and
        # log-variance of a 2-d Gaussian over (lat, lon)
        self.M = nn.Linear(k, 2)
        self.logS = nn.Linear(k, 2)

    def reparameterize(self, mu, logvar):
        if self.training:
            # sample x = mu + std * eps with eps ~ N(0, I)
            std = torch.exp(0.5 * logvar)
            eps = torch.randn_like(std)
            return eps.mul(std).add(mu)
        else:
            return mu

    def forward(self, pi):
        mu = self.M(pi)
        logvar = self.logS(pi)
        return self.reparameterize(mu, logvar), mu, logvar
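
For context, the forward pass just maps a Pi vector to a reparameterized sample plus the Gaussian parameters; a quick shape check looks like this (the batch size and k=10 are placeholders for illustration, not the values from my experiment):

# hypothetical shapes, only to show how the module is called
model = ProbX(k=10)
pi = torch.softmax(torch.randn(32, 10), dim=1)   # batch of 32 Pi vectors
x_sample, mu, logvar = model(pi)
print(x_sample.shape, mu.shape, logvar.shape)    # each is torch.Size([32, 2])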

and for training, the loss is the reconstruction loss (the Gaussian negative log-likelihood):

def reconstruction_loss(x, M, logS):
    # negative log-likelihood of x under N(M, diag(exp(logS))),
    # dropping the constant 0.5 * log(2 * pi) term
    return 0.5 * torch.sum(logS + torch.pow((x - M), 2) / torch.exp(logS))
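
For reference, my understanding is that this loss has a unique minimum at the maximum-likelihood estimates: assuming M and logS are effectively constant across the batch of N points, setting the gradients to zero for each output dimension d gives

M_d = \frac{1}{N}\sum_{i=1}^{N} x_{i,d}, \qquad e^{\log S_d} = \frac{1}{N}\sum_{i=1}^{N}\left(x_{i,d} - M_d\right)^2,

so I would expect the learned mean and variance to settle at the sample mean and variance rather than oscillate.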

I had written a simple experiment (see the attached file below) to test this, but the loss does not converge: it decreases to a minimum, then increases again, and repeats.

To run the code: python3 test_decoder_independent.py

In the experiment, I generate points from a Gaussian distribution with known parameters and then try to recover the mu and sigma values through the generator's parameters.
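
Roughly, the setup does the following (this is a simplified sketch of my experiment, not the exact attached script; the constants, optimizer, and learning rate here are placeholders):

import torch
import torch.optim as optim

# true parameters of the data-generating Gaussian (placeholder values)
true_mu = torch.tensor([12.97, 77.59])
true_sigma = torch.tensor([0.05, 0.08])

# sample training points from the known Gaussian
N = 10000
x = true_mu + true_sigma * torch.randn(N, 2)

# every point gets the same "correct" Pi vector
k = 10
pi = torch.zeros(N, k)
pi[:, 0] = 1.0

model = ProbX(k)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    _, mu, logvar = model(pi)
    loss = reconstruction_loss(x, mu, logvar)
    loss.backward()
    optimizer.step()

# in eval mode the forward pass returns the mean directly
model.eval()
_, mu, logvar = model(pi)
print(mu[0], torch.exp(0.5 * logvar[0]))  # should approach true_mu and true_sigma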

Maybe I'm missing something in my code (do we need some other loss as well?). If you could help me fix this, that would be awesome.

Thanks, Jayant

generative_nn2_experiment.zip