Hello, thanks for providing your code.

I have a question about LPBNN_layers, line 75:

```python
embedded_mean, embedded_logvar = self.encoder_fcmean(embedded), self.encoder_fcmean(embedded)
```

Should this not be:

```python
embedded_mean, embedded_logvar = self.encoder_fcmean(embedded), self.encoder_fcvar(embedded)
```

As it stands, the same head is used for both outputs, which forces the mean and logvar of the VAE embedding to be identical. This bug is present in every layer defined in LPBNN_layers.

Additionally, I was wondering why the VAE embedding is applied only to alpha and not to gamma. Is there a benefit to treating only alpha as Bayesian? Was this the configuration used for the results reported in your paper?
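For reference, here is a minimal sketch of what the corrected forward pass would look like with separate mean and logvar heads plus the usual reparameterization step. Only the `encoder_fcmean` / `encoder_fcvar` names come from the repo; the layer widths, module structure, and how alpha feeds into the BatchEnsemble layer are assumptions for illustration, not your implementation:

```python
import torch
import torch.nn as nn

class AlphaVAEEmbedding(nn.Module):
    """Illustrative VAE embedding over the rank-1 alpha vectors."""

    def __init__(self, in_features, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(in_features, latent_dim)
        self.encoder_fcmean = nn.Linear(latent_dim, latent_dim)
        # Distinct head for logvar: this is the fix under discussion.
        self.encoder_fcvar = nn.Linear(latent_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_features)

    def forward(self, alpha):
        embedded = torch.relu(self.encoder(alpha))
        # Mean and logvar now come from different heads.
        embedded_mean = self.encoder_fcmean(embedded)
        embedded_logvar = self.encoder_fcvar(embedded)
        # Reparameterization trick: z = mu + sigma * eps.
        std = torch.exp(0.5 * embedded_logvar)
        z = embedded_mean + std * torch.randn_like(std)
        return self.decoder(z), embedded_mean, embedded_logvar
```

With the duplicated `encoder_fcmean` call, `embedded_logvar == embedded_mean`, so the sampled noise scale is tied to the mean rather than being an independently learned variance.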