-
## Description
I'd like to suggest implementing implicit reparameterization gradients, as described in the paper [1], for the Gamma distribution: `ndarray.sample_gamma` and `symbol.sample_gamma`…
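For illustration, here is a minimal NumPy/SciPy sketch of the trick from [1] (not MXNet code; the function name is hypothetical): the gradient of a Gamma sample z with respect to the shape α is obtained by implicitly differentiating the CDF, dz/dα = -(∂F/∂α)/(∂F/∂z).

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma = Gamma(alpha, 1) CDF
from scipy.stats import gamma

def implicit_grad_gamma(z, alpha, eps=1e-5):
    # dF/dalpha via central finite differences (an analytic form also exists).
    dF_dalpha = (gammainc(alpha + eps, z) - gammainc(alpha - eps, z)) / (2 * eps)
    dF_dz = gamma.pdf(z, a=alpha)  # the density is dF/dz
    return -dF_dalpha / dF_dz      # implicit reparameterization gradient

alpha = 2.0
z = np.random.gamma(alpha)
print(implicit_grad_gamma(z, alpha))
```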
-
### Issue Description
**TL;DR**
Support for MLE, MAP, and Variational inference!
**Context**
In situations where scalability and speed need to be balanced with posterior sample quality, various…
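As a toy illustration of the three inference modes the request names (this is not from the issue; the model is a conjugate Gaussian chosen so all three have closed forms):

```python
import numpy as np

# Gaussian mean with known variance sigma2 and a Normal(0, tau2) prior.
x = np.random.normal(1.0, 1.0, size=50)
sigma2, tau2 = 1.0, 1.0

mle = x.mean()                             # MLE: maximizes the likelihood
map_ = x.sum() / (len(x) + sigma2 / tau2)  # MAP: mode of the posterior (shrunk toward 0)
# For this conjugate model, variational inference with a Gaussian family
# recovers the exact posterior: mean equals the MAP, variance as below.
post_var = 1.0 / (len(x) / sigma2 + 1.0 / tau2)
print(mle, map_, post_var)
```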
-
```
Hi,
I have little knowledge of DPs (Dirichlet processes). However, I used the code to estimate the density of some mixtures and tried to recover the mixture components. It seems that the algorithm produces different resul…
```
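Run-to-run variation is expected with DP mixture inference: the objective is multimodal and initialization is random. A minimal sketch that makes the effect visible (using scikit-learn's truncated DP mixture, not the code from this thread) by varying only the seed:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two well-separated Gaussian components.
X = np.concatenate([np.random.normal(-3, 1, (200, 1)),
                    np.random.normal(3, 1, (200, 1))])

for seed in (0, 1, 2):
    dpgmm = BayesianGaussianMixture(
        n_components=10,  # truncation level of the DP
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed).fit(X)
    # The set of components with non-negligible weight can differ across seeds.
    print(seed, np.round(dpgmm.weights_, 2))
```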
-
Can someone tell me why we optimize the NELBO? The paper only says "We optimize the ELBO with respect to the variational parameters." As far as I understand, D-ETM consists of three neural network…
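For context: the NELBO is just the negative ELBO. Gradient-based optimizers minimize, so "optimize the ELBO" is implemented as minimizing its negation; a schematic sketch (names hypothetical, not D-ETM's actual code):

```python
def nelbo(expected_log_lik, kl_term):
    # ELBO = E_q[log p(x | z)] - KL(q(z) || p(z)); maximizing the ELBO
    # is equivalent to minimizing the NELBO passed to the optimizer.
    elbo = expected_log_lik - kl_term
    return -elbo
```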
-
### 😎 Responsibility
@wh4044, @juk1329 , @Glanceyes
### 💡 Issue
- It was reported that inference with the LightGCN model @juk1329 experimented with produces problem recommendations that are excessively biased toward the user's preferences (see the sketch after this list).
- Using LightGCN's trained user or tag embeddings with Multi-D…
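For reference, LightGCN scores items by an inner product between the learned embeddings, so any skew in the embedding space surfaces directly in the top-k list. A minimal scoring sketch (array names and shapes hypothetical):

```python
import numpy as np

# user_emb: (n_users, d), item_emb: (n_items, d), as learned by LightGCN.
user_emb = np.random.randn(100, 64)
item_emb = np.random.randn(500, 64)

def recommend(u, k=10):
    scores = item_emb @ user_emb[u]   # inner-product scoring
    return np.argsort(-scores)[:k]    # top-k items; can skew toward the
                                      # user's dominant preference
```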
-
This is easy in Edward: take any model already in Edward and define the variational family using [`TransformedDistribution`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/distri…
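A minimal sketch of the pattern being described, written against the Edward/TF-1.x contrib API the link points to (untested; treat the exact constructor arguments as assumptions):

```python
import tensorflow as tf
from edward.models import TransformedDistribution

ds = tf.contrib.distributions

# Variational family over a positive-valued latent: a Normal pushed
# through an Exp bijector (i.e. a log-normal), with trainable loc/scale.
qz = TransformedDistribution(
    distribution=ds.Normal(loc=tf.Variable(0.0),
                           scale=tf.nn.softplus(tf.Variable(1.0))),
    bijector=ds.bijectors.Exp())
```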
-
**Point Cloud Completion**
- "Topnet: Structural point cloud decoder", CVPR 2019
- "3D Shape Completion with Multi-view Consistent Inference", AAAI 2020
- "Morphing and Sampling Network for Dense P…
-
It would be worth having something generic for all things related to stochastic approximations, separated from variational inference itself, e.g., an SGD class supporting different stochastic gradie…
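Sketching what such a generic interface might look like (entirely hypothetical, not existing library code): a Robbins-Monro update that any inference algorithm could plug its gradient estimator into.

```python
class SGD:
    """Robbins-Monro iterate x <- x - a_t * g(x), with a_t a step-size schedule."""

    def __init__(self, step_size=lambda t: 0.1 / (1 + t)):
        self.step_size = step_size
        self.t = 0

    def update(self, x, grad):
        # grad may be any unbiased stochastic estimate of the true gradient.
        x = x - self.step_size(self.t) * grad
        self.t += 1
        return x
```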
-
I am new to Edward.
The samples drawn from the prior in "Your first Edward program" (Jupyter notebook version) seem to be drawn from qW_0, qW_1, qb_0, and qb_1 instead of W_0, W_1, b_0, and b_1. Shoul…
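For comparison, drawing from the prior in Edward's TF-1.x style amounts to running the prior random variables themselves rather than the q variables; a sketch (shapes chosen arbitrarily, not the notebook's code):

```python
import tensorflow as tf
from edward.models import Normal

# Priors, in the style of the tutorial's model definition.
W_0 = Normal(loc=tf.zeros([1, 2]), scale=tf.ones([1, 2]))
b_0 = Normal(loc=tf.zeros(2), scale=tf.ones(2))

with tf.Session() as sess:
    # Running a random variable draws a sample from it: these come
    # from the priors W_0/b_0, not the approximations qW_0/qb_0.
    w_draw, b_draw = sess.run([W_0, b_0])
```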