sjfleming opened this issue 4 months ago
On 2/24/24, we worked out that there is a generative model involving $P_{np}$ (specifically, a model with the full attention mechanism as part of the generative process) for which the math is exactly the same, provided we impose ARD as a posterior regularization rather than as something that arises naturally from the ELBO.
Currently the generative model for cellcap does not involve $P_{np}$, which means the model does not capture some of the major features we care about in the data. This works for now, since we have a very expressive posterior, but it could be better: inference often improves when the model improves.
Consider including $P_{np}$ in the model in such a way that random samples from the model yield something much more like a simulated data distribution. (Consider getting the covariates $D_{nc}$ involved as well.)
This amounts to coming up with a better prior overall.
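For concreteness, here is a minimal sketch of what a generative process with $P_{np}$ and $D_{nc}$ in it could look like. This is not the actual cellcap model: the dimensions, the scaled-dot-product form of the attention, and the names `W_pg`, `B_cg`, `keys` are all hypothetical placeholders, and a Poisson likelihood stands in for whatever count distribution the model actually uses. The point is only that $P_{np}$ is sampled inside the model, so draws from the prior already carry program-attention structure.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical dimensions, for illustration only
n_cells, n_latent, n_programs, n_covariates, n_genes = 8, 10, 4, 2, 100


def sample_from_model():
    """Draw one sample from a toy generative model that includes P_np."""
    # Latent cell state z_n ~ N(0, I)
    z = torch.randn(n_cells, n_latent)

    # Program keys; in a real model these would be learned parameters
    keys = torch.randn(n_programs, n_latent)

    # P_np: each cell's attention over response programs (rows sum to 1),
    # here a scaled dot-product attention inside the generative model
    P_np = F.softmax(z @ keys.T / n_latent**0.5, dim=-1)

    # Hypothetical loadings of programs and covariates onto genes
    W_pg = torch.randn(n_programs, n_genes)
    B_cg = torch.randn(n_covariates, n_genes)

    # Observed covariates D_nc (e.g. batch indicators), random one-hot here
    D_nc = F.one_hot(
        torch.randint(n_covariates, (n_cells,)), n_covariates
    ).float()

    # Gene-level rate: program mixture plus covariate offset
    log_rate = P_np @ W_pg + D_nc @ B_cg
    counts = torch.poisson(torch.exp(torch.clamp(log_rate, max=8.0)))
    return P_np, counts
```

Because $P_{np}$ is generated rather than purely inferred, samples from this prior already look like cells mixing a small number of response programs, which is closer to a simulated data distribution.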