vislearn / FFF

Free-form flows are a generative model that trains a pair of neural networks via maximum likelihood
MIT License

How to sample from latent space after training #2

Closed: liang-zhicong closed this issue 10 months ago

liang-zhicong commented 11 months ago

Thank you for your excellent work! After training a model, I'm wondering how to sample noise from the latent space and then use the decoder to generate samples. It would be great to have an example.

liang-zhicong commented 11 months ago

After training my own model using fff_loss, I am sampling noise from a standard normal distribution and generating samples with the decoder. However, the generated results are of poor quality. Can you provide me with some assistance? Here is the code I use for training and sampling.

# fff_loss is provided by the FFF package; torch, args, model, optimizer and
# train_loader are set up elsewhere in my script.
for idx, img in enumerate(train_loader):
    img = img.to(args.device)
    optimizer.zero_grad()
    # per-sample FFF maximum-likelihood loss with reconstruction weight beta
    loss = fff_loss(img, model.encoder, model.decoder, beta=args.beta)
    loss = loss.mean()
    loss.backward()
    optimizer.step()

def sample(model, batch_size, patch_size, device):
    # draw latent codes from a standard normal prior and decode them
    z = torch.randn((batch_size, 3, patch_size, patch_size), dtype=torch.float32, device=device)
    x = model.decoder(z)
    return x
fdraxler commented 10 months ago

Hi, thanks for your question. The code you provided looks correct, so I assume that training has simply not converged. We usually track the negative log-likelihood on validation data to check whether the model is converging. This can be computed by explicitly calculating the Jacobian for (a subset of) validation samples. Please check https://github.com/vislearn/FFF/blob/main/fff/base.py#L264 and https://github.com/vislearn/FFF/blob/main/fff/fff.py#L49 for sample code on how to achieve this.
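For illustration, here is a minimal sketch of such a check. It assumes a dimension-preserving encoder and a standard normal prior; the helper name exact_nll and this particular setup are my own assumptions, not code taken from the repository.

import math
import torch
from torch.autograd.functional import jacobian

def exact_nll(encoder, x_batch):
    # Exact negative log-likelihood for a few validation samples, computed
    # with the full encoder Jacobian (only feasible for small batches/dimensions).
    nlls = []
    for x in x_batch:
        x = x.unsqueeze(0)                      # (1, C, H, W)
        d = x.numel()
        z = encoder(x).reshape(-1)              # flattened latent code, shape (d,)
        # full Jacobian dz/dx, reshaped to a (d, d) matrix
        J = jacobian(lambda inp: encoder(inp).reshape(-1), x).reshape(d, d)
        log_det = torch.slogdet(J)[1]
        # standard normal log-density of z plus the change-of-variables term
        log_pz = -0.5 * (z ** 2).sum() - 0.5 * d * math.log(2 * math.pi)
        nlls.append(-(log_pz + log_det))
    return torch.stack(nlls).mean()

Tracking this quantity on a fixed validation subset over the course of training should show it decreasing if the model is converging.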

Also, please make sure that the architecture you use is appropriate for your task.

I am closing the issue since I consider the question answered for now, but feel free to open another issue if you need further assistance.