categorical variational autoencoder using the Gumbel-Softmax estimator
Many thanks for your code. I have a question: if I have a matrix whose elements are discrete (i.e., binary, 0 or 1) and need to be trained under a given loss function, how should I train it? #8
Gumbel-Softmax is designed for inference of discrete latent variables, not for discrete optimization (i.e., with respect to the elements of a matrix). That is, it is applicable in settings where you want to backprop through samples from a discrete distribution. In the case of matrices, you might apply GS to matrix samples from a Wishart distribution.
Does that help?
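For concreteness, here is a minimal NumPy sketch of the Gumbel-Softmax relaxation the answer refers to: noise drawn from a Gumbel distribution is added to the logits, and a temperature-scaled softmax produces a differentiable, approximately one-hot sample. Function names are illustrative, not taken from this repo.

```python
import numpy as np

def sample_gumbel(shape, rng, eps=1e-20):
    # Gumbel(0, 1) noise via the inverse-CDF trick: -log(-log(U))
    u = rng.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax(logits, temperature, rng):
    # Perturb logits with Gumbel noise, then apply a tempered softmax.
    # As temperature -> 0 the output approaches a one-hot sample from
    # Categorical(softmax(logits)); higher temperatures give smoother,
    # lower-variance relaxations.
    y = (logits + sample_gumbel(logits.shape, rng)) / temperature
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

Because every operation above is differentiable in `logits`, gradients can flow through the relaxed sample, which is exactly the setting the estimator targets.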