stitchfix / fauxtograph

Tools for using a variational auto-encoder for latent image encoding and generation.
MIT License

image loss function? #15

Open dribnet opened 8 years ago

dribnet commented 8 years ago

The VAE and VAEGAN code currently uses mean squared error as the reconstruction loss. In most papers and implementations I'm more used to seeing binary cross entropy, with numbers reported in nats.

Curious what we think would be best here. I took a quick look in the Chainer docs but didn't see binary cross entropy listed among the built-in loss functions.

tjtorres commented 8 years ago

Any chance you might point me to sources? I have seen BCE used to more accurately reflect the distribution of the data when it's binary (for instance, training on MNIST), but I'm not sure I see the benefit of using it for continuous pixel values, as in most images. I'm definitely willing to change this if there is compelling evidence that it would be a good idea, so please post the papers/implementations and I will take a look.

dribnet commented 8 years ago

I'm most familiar with DRAW, which says (section 4):

[screenshot: the binary cross-entropy reconstruction loss equation from the DRAW paper]

Will try to track down something more recent to see if this is best practice more broadly.

cemoody commented 8 years ago

There's a sigmoid cross entropy available, which might be of use here.
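
For reference, a minimal sketch of how that built-in (F.sigmoid_cross_entropy) could serve as the reconstruction term; it expects hard integer 0/1 targets, so the images would need to be binarized first. The function and variable names below are illustrative, not fauxtograph's actual code.

```python
import numpy as np
import chainer.functions as F

# Illustrative only: `logits` is the decoder output *before* the sigmoid,
# `x` a batch of images scaled to [0, 1].
def bce_reconstruction(logits, x, threshold=0.5):
    # sigmoid_cross_entropy wants integer targets, so binarize the images.
    targets = (x >= threshold).astype(np.int32)
    # Defaults (normalize=True, reduce='mean') average over every pixel; multiply by
    # the pixel count per image to report nats per image instead.
    return F.sigmoid_cross_entropy(logits, targets)
```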

umguec commented 8 years ago

This is from Kingma and Welling (2013):

We let pθ(x|z) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from z with a MLP (a fully-connected neural network with a single hidden layer, see appendix C).


Here is a more recent paper in which a similar formulation is used.

Chainer has gaussian_nll and bernoulli_nll loss functions for VAE.
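
As a rough sketch (not the existing fauxtograph code), those two built-ins could slot into a VAE objective together with Chainer's gaussian_kl_divergence; the argument names here are assumptions.

```python
import chainer.functions as F

def vae_loss(x, dec_out, enc_mu, enc_ln_var, dec_ln_var=None, bernoulli=True):
    """Negative ELBO per example: reconstruction NLL + KL(q(z|x) || N(0, I))."""
    if bernoulli:
        # Data in [0, 1]; dec_out is interpreted as pre-sigmoid logits.
        rec = F.bernoulli_nll(x, dec_out)
    else:
        # Real-valued data; the decoder also predicts a per-pixel log-variance.
        rec = F.gaussian_nll(x, dec_out, dec_ln_var)
    kl = F.gaussian_kl_divergence(enc_mu, enc_ln_var)
    return (rec + kl) / x.shape[0]  # both terms are summed, so divide by batch size
```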

tjtorres commented 8 years ago

It definitely makes sense to add the Bernoulli negative log-likelihood for data that really is Bernoulli distributed, as in, say, MNIST, though I hadn't envisioned that being a big use case initially. However, after recently trying to use the package to train on a font dataset, and realizing performance suffered unless I artificially induced continuity with a slight Gaussian filtering, I think it's probably a good idea to include this as a loss option. The Gaussian NLL is quite similar to MSE assuming unit covariance, but they do differ somewhat, and I'd be willing to adopt that as an additional option too, since implementing both is easy (as you point out, both already exist in Chainer). I will assign myself to this unless there are volunteers.
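
As a quick numerical illustration of that similarity (just a sketch using Chainer's gaussian_nll on fake data): with unit variance the Gaussian NLL reduces to half the summed squared error plus a constant.

```python
import numpy as np
import chainer.functions as F

# Fake data, only to check the relationship numerically.
x = np.random.rand(8, 3 * 64 * 64).astype(np.float32)
mu = np.random.rand(8, 3 * 64 * 64).astype(np.float32)
ln_var = np.zeros_like(mu)  # unit variance

nll = float(F.gaussian_nll(x, mu, ln_var).array)  # summed over batch and pixels
sse = float(np.sum((x - mu) ** 2))
const = 0.5 * x.size * np.log(2.0 * np.pi)
print(np.isclose(nll, 0.5 * sse + const, rtol=1e-4))  # True: NLL = 0.5 * SSE + const
```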

dribnet commented 8 years ago

I'm hoping to use binarized MNIST (with validation data) as a sanity check to compare the NLL test score fauxtograph can achieve against other generative implementations.
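
For concreteness, a hedged sketch of what that check could look like; binarize and decode_logits are hypothetical stand-ins, and this reports only the reconstruction term in nats per image (the full ELBO would add the KL term).

```python
import numpy as np
import chainer.functions as F

def binarize(x, rng=np.random):
    # Stochastic binarization, as in the usual binarized-MNIST benchmark.
    return (rng.uniform(size=x.shape) < x).astype(np.float32)

def reconstruction_nll_nats(decode_logits, x_test):
    # decode_logits: hypothetical callable returning pre-sigmoid decoder outputs.
    xb = binarize(x_test)
    nll = F.bernoulli_nll(xb, decode_logits(xb))  # summed over batch and pixels
    return float(nll.array) / len(xb)             # nats per test image
```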

tjtorres commented 8 years ago

Sounds great! Should be quite fast to validate on MNIST, though I think the images will be too small for the convolutional architecture currently available: MNIST images are 28x28 and fauxtograph supports 32x32 at the smallest. A simple workaround would be to preprocess the set and add a 2-pixel black border on all sides, as sketched below. I have also been thinking of adding a conditional semi-supervised option or an adversarial autoencoder class at some point. Would be good to benchmark them all.
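
A hypothetical NumPy helper for that preprocessing step (not part of fauxtograph):

```python
import numpy as np

def pad_mnist(images):
    """Pad a (N, 28, 28) batch to (N, 32, 32) with a 2-pixel zero (black) border."""
    return np.pad(images, ((0, 0), (2, 2), (2, 2)), mode='constant', constant_values=0.0)
```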

abhinav3 commented 5 years ago

I've tried both BCELoss and MSELoss for CIFAR-10 reconstructions with an autoencoder. MSELoss gives better-looking reconstructed images than BCELoss.
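
For anyone reproducing this, a minimal PyTorch sketch of the two losses being compared; recon stands in for the autoencoder output after a sigmoid, so BCELoss is well defined on [0, 1] values.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the autoencoder output (post-sigmoid) and the input images,
# both in [0, 1] as torchvision's ToTensor gives for CIFAR-10.
recon = torch.rand(16, 3, 32, 32)
target = torch.rand(16, 3, 32, 32)

bce = F.binary_cross_entropy(recon, target)  # what nn.BCELoss() computes
mse = F.mse_loss(recon, target)              # what nn.MSELoss() computes
print(bce.item(), mse.item())
```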