-
Hi,
I was going to work on exercise 9 from Chapter 17 (denoising autoencoder), and wanted to try using the best classifier I had trained so far on MNIST digits, which is an SE-ResNet, as a basis for…
-
Hello! I saw your recent preprint, "Scalable estimation of microbial co-occurrence networks with Variational Autoencoders" and I'm hopeful your method may solve my issues, but I wanted to touch base t…
-
This is somewhat of an elevator pitch referring to the profit meme in https://github.com/DiffSharp/DiffSharp/issues/69#issuecomment-586476537
My proposal for the `????` part is to consider Fable in…
-
To add a loss and metrics to a model, I can add them to `model.compile(loss=..., metrics=...)`, provided that they have the signature `fn(y_true, y_pred)`, see the [docs](https://keras.io/api/models/m…
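The snippet above refers to Keras's `model.compile(loss=..., metrics=...)` API, where a custom loss or metric only needs the two-argument signature `fn(y_true, y_pred)`. A minimal sketch of such a function (a pure-NumPy stand-in here for illustration; in Keras the arguments are tensors and the same callable is passed to `compile`):

```python
import numpy as np

# Any callable with the signature fn(y_true, y_pred) can serve as a Keras
# loss or metric. This NumPy stand-in mirrors that contract.
def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

# In Keras the same function would be passed directly, e.g.:
# model.compile(optimizer="adam",
#               loss=mean_absolute_error,
#               metrics=[mean_absolute_error])

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
print(mean_absolute_error(y_true, y_pred))  # 0.5
```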
-
## Background
Backpropagation through random variables is no easy task. Two main methods are often adopted for derivative estimation: the score function estimator and the pathwise derivative estimator (see h…
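The two estimators mentioned above can be illustrated on a toy problem: estimating the gradient of E[x²] with respect to μ for x ~ N(μ, σ²), whose true value is 2μ. This is a hedged sketch (not from the referenced issue itself); it uses Monte Carlo sampling with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 1.0, 200_000
x = rng.normal(mu, sigma, size=n)

# Score function (REINFORCE) estimator of d/dmu E[f(x)]:
#   E[f(x) * d/dmu log p(x)], with d/dmu log N(x; mu, sigma^2) = (x - mu) / sigma^2
score_grad = np.mean(x**2 * (x - mu) / sigma**2)

# Pathwise (reparameterization) estimator: write x = mu + sigma * eps with
# eps ~ N(0, 1), then differentiate through the sample:
#   d/dmu (mu + sigma * eps)^2 = 2 * (mu + sigma * eps)
eps = rng.standard_normal(n)
path_grad = np.mean(2 * (mu + sigma * eps))

# Both converge to the true gradient 2 * mu = 2.0; the pathwise estimator
# typically has much lower variance, which is why it is preferred when
# f is differentiable and the distribution is reparameterizable.
print(score_grad, path_grad)
```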
-
Sik-Ho Tang. [Review — BEiT: BERT Pre-Training of Image Transformers](https://sh-tsang.medium.com/review-beit-bert-pre-training-of-image-transformers-c14a7ef7e295).
-
Graph generative models are important for the tasks we have been describing.
The core idea is to posit a model which defines some distribution over graphs `P(G)`, for instance via a low dimensi…
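In the spirit of the truncated description above, a latent-variable graph model can be sketched as: sample a low-dimensional latent vector, decode it into edge probabilities, and draw an adjacency matrix, which together define a distribution `P(G)`. This is a hypothetical toy sketch (the decoder `W` and sigmoid link are my assumptions, not the excerpt's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, latent_dim = 5, 2
W = rng.normal(size=(n_nodes, latent_dim))  # toy "decoder" weights (assumed)

def sample_graph(rng):
    z = rng.normal(size=latent_dim)               # low-dimensional latent code
    logits = W @ z                                 # per-node scores
    # Edge probability between i and j from the sum of their scores:
    probs = 1.0 / (1.0 + np.exp(-(logits[:, None] + logits[None, :])))
    upper = rng.random((n_nodes, n_nodes)) < probs # Bernoulli edge draws
    A = np.triu(upper, 1)                          # keep strict upper triangle
    return (A | A.T).astype(int)                   # symmetric, no self-loops

A = sample_graph(rng)
```

Marginalizing over the latent `z` yields the induced distribution over graphs; real graph generative models replace the linear decoder with a learned network.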
-
Hi,
I have some questions regarding the implementation, and I can't reproduce the perplexities reported in the paper.
1. I'd be interested in #5, as well.
2. I can't reproduce the results mention…
-
Currently, there is a comment on the counts measurement process about differentiability.
![image](https://user-images.githubusercontent.com/43949391/184201365-bee538de-21f4-4f9d-b7dd-df1d27e89799.png…
-