joshthoward closed this issue 7 years ago
@snurkabill
Hello,
could you please post the full usage of your autoencoder here? I can't run any code at the moment, so I would at least like to see it.
Thanks
It is easiest to reproduce by scripting. I did the following from the autoencoder directory.
If you make the change to the source code that I mentioned before, the same script will result in the second error.
OK. If I recall correctly, you must first perform at least one partial fit on the variational autoencoder, and only then can you generate new sequences. Please try that and let me know whether it works. Maybe that use case is broken and there really is a mistake.
I just tried what you suggested, and it resulted in the same errors. I agree that the result of calling the function prior to training should be nonsensical, but I don't think it should result in a syntax error.
Looking at the code now, I believe I've found the problem. (By the way, I still can't run Python in my current setup, so I can't try anything.) Isn't the real problem that `self.weights["b1"]` is actually a vector, not its size? `np.random` therefore chokes on it and can't generate a random vector.
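A minimal standalone sketch of the suspected bug (`b1` here is just a stand-in numpy array for `self.weights["b1"]`, which in the repository is a model parameter):

```python
import numpy as np

b1 = np.zeros(20)  # stand-in for the bias vector self.weights["b1"]

# Passing the vector itself as `size` fails: numpy expects a shape
# (an int or tuple of ints), not an array of floats.
try:
    np.random.normal(size=b1)
    print("no error")
except Exception as e:
    print(type(e).__name__)

# Passing the vector's *shape* instead yields a proper random vector:
sample = np.random.normal(size=[1, b1.shape[0]])
print(sample.shape)  # (1, 20)
```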
Yes, that is the problem that causes the first error. The second error results from calling it correctly, and this is where I believe there is a logical error in the model.
The point is that when one calls the `generate()` method, the reconstruction is created from a randomly generated latent vector (`z`).
I agree with your statement, but I don't understand your point.
Since the variational autoencoder has `z` as the normally distributed latent variable, rather than `z_mean`, it is `z` that should be provided as input to the DAG in the generate function. I have submitted a pull request to fix this.
The real topic that I wanted to discuss was whether `z` can actually be considered normal. It is shifted and scaled from the normal distribution, even though this shifting and scaling is minimized by the latent loss. I was able to confirm that this was the case by following page 8 of the Tutorial on Variational Autoencoders and by generating some test cases.
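The shift-and-scale in question is the reparameterization z = μ + σ·ε with ε ~ N(0, 1), so `z` is exactly standard normal only when the latent loss has driven μ → 0 and σ → 1. A quick numerical check (the μ and log σ² values below are made up for illustration, not taken from the model):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, log_sigma_sq = 0.05, -0.1        # hypothetical encoder outputs
eps = rng.standard_normal(100_000)   # epsilon ~ N(0, 1)
z = mu + np.sqrt(np.exp(log_sigma_sq)) * eps  # reparameterization trick

# z is N(mu, sigma): close to, but not exactly, standard normal
print(round(z.mean(), 3), round(z.std(), 3))  # roughly 0.05 and 0.951
```

When the KL term keeps μ near 0 and σ near 1 (as on MNIST here), samples of `z` are hard to distinguish from standard-normal draws, which matches the summary statistics mentioned above.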
In the `autoencoders/autoencoder_models` directory, the `generate()` function of the `VariationalAutoencoder` is broken. Using it results in an error. This is fixed by calling `numpy.random.normal(size=[1, self.n_hidden])` instead, but that creates a new error. I believe the second error comes from the fact that `self.x` is required to generate `self.reconstruction` from `self.z_mean`, whereas `self.z` does not require it.

In terms of a solution, I have seen other implementations of VAEs that have a "generation" function feeding a normal random variable into the `self.z` parameter, but the math does not quite work out, since `self.z` is shifted and scaled from the normal distribution. I ran some summary statistics between `self.z` and `self.z_mean`, and they were very close on the MNIST data. I am mainly looking for thoughts on this.
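For reference, a numpy-only toy of the generate-from-prior idea (the decoder weights, shapes, and function names here are invented stand-ins, not the repository's code): generation samples the latent code from the standard-normal prior, which plays the role of feeding `self.z` directly, whereas `self.z_mean` only exists downstream of an input `x`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_input = 2, 4

# Toy decoder parameters (stand-ins for the model's decoder weights):
W2 = rng.standard_normal((n_hidden, n_input))
b2 = rng.standard_normal(n_input)

def decode(z):
    # map a latent code to a reconstruction
    return z @ W2 + b2

def generate():
    # sample the latent code from the N(0, 1) prior -- the analogue of
    # feeding self.z -- instead of computing z_mean, which needs an x
    z = rng.standard_normal((1, n_hidden))
    return decode(z)

print(generate().shape)  # (1, 4)
```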