-
Using the `Exponential(x)` distribution should be equivalent to using `Gamma(1, x)`, yet I'm getting different results when I use one or the other as the approximation of the posterior in `KLqp`. For exam…
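For reference, the two parameterizations agree analytically: `Exponential(rate=λ)` is exactly `Gamma(concentration=1, rate=λ)`. A minimal sketch checking the densities directly (using TensorFlow Probability's distribution classes, which follow the same convention as Edward's):

```python
# Minimal sketch comparing the two parameterizations' log-densities.
# Uses TensorFlow Probability; the original question is about Edward's
# KLqp, whose distributions use the same rate convention.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

rate = 2.0
expo = tfd.Exponential(rate=rate)
gamma = tfd.Gamma(concentration=1.0, rate=rate)

x = tf.constant([0.1, 0.5, 1.0, 3.0])
# These should match to numerical precision, so any KLqp discrepancy
# likely comes from the variational family's initialization or
# optimization path, not from the density itself.
print(expo.log_prob(x).numpy())
print(gamma.log_prob(x).numpy())
```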
-
Post your questions here about: “[The Datome - Finding, Wrangling and Encoding Everything as Data](https://docs.google.com/document/d/1vg-W55u3naN1gPmMyPhBWyIS8dJovdvlh6NUKMiFEjo/edit?usp=sharing)”, “…
-
@ziatdinovmax as we discussed, I plan on implementing a simulated campaigning loop for tuning the "hyperparameters" of an optimization loop. I first want to learn this library inside and out so it mi…
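For concreteness, here is a sketch of what I mean by a simulated campaigning loop (every name below is a hypothetical placeholder, not this library's API): run many short simulated campaigns on a known test function and score each hyperparameter setting by the best value found.

```python
# Hypothetical sketch of a simulated campaigning loop; `run_campaign`
# stands in for whatever acquisition/optimization loop the library exposes.
import numpy as np

def objective(x):
    # Known test function, so simulated campaigns can be scored exactly.
    return float(np.sum(x ** 2))

def run_campaign(exploration_weight, n_steps=30, rng=None):
    # Placeholder campaign: random search whose spread is controlled
    # by the hyperparameter we want to tune.
    if rng is None:
        rng = np.random.default_rng()
    best = np.inf
    for _ in range(n_steps):
        x = rng.normal(scale=exploration_weight, size=2)
        best = min(best, objective(x))
    return best

# Tune the "hyperparameter" by averaging over repeated simulated campaigns.
rng = np.random.default_rng(0)
for w in (0.1, 0.5, 1.0, 2.0):
    scores = [run_campaign(w, rng=rng) for _ in range(50)]
    print(w, np.mean(scores))
```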
-
Having `aic` or `gcv` available for the models that were estimated during model selection will give us an extra check or evaluation of all or the top candidate models.
#6268 for aic/bic based model sel…
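As a sketch of the kind of check this would enable (statsmodels OLS results already expose `.aic`; the candidate models below are illustrative):

```python
# Sketch: ranking candidate models by AIC after a selection pass.
# Assumes statsmodels-style results objects, which expose `.aic`.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)

candidates = {
    "x0": sm.add_constant(X[:, [0]]),
    "x0+x1": sm.add_constant(X[:, [0, 1]]),
    "x0+x1+x2": sm.add_constant(X),
}
results = {name: sm.OLS(y, exog).fit() for name, exog in candidates.items()}
# Smallest AIC first: a quick sanity check on the top candidates.
for name, res in sorted(results.items(), key=lambda kv: kv[1].aic):
    print(name, res.aic)
```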
-
We need some extra optimisers. We should aim to add at least one implementation of each of the following (a minimal grid-search sketch follows the list):
L-M (Levenberg–Marquardt) algorithm
Grid search
Genetic Algorithm
Nested Sampling
For some of them we can u…
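To seed the discussion, grid search is the simplest to sketch (a library-agnostic version; the parameter names and objective are placeholders, not this project's optimiser interface):

```python
# Minimal grid-search sketch; the real implementation would need to
# conform to the project's optimiser interface.
import itertools
import numpy as np

def grid_search(objective, grids):
    """Exhaustively evaluate `objective` on the Cartesian product of grids."""
    best_params, best_value = None, np.inf
    for params in itertools.product(*grids.values()):
        point = dict(zip(grids.keys(), params))
        value = objective(point)
        if value < best_value:
            best_params, best_value = point, value
    return best_params, best_value

# Example: minimise a quadratic over a coarse 2-D grid.
obj = lambda p: (p["a"] - 1.0) ** 2 + (p["b"] + 2.0) ** 2
grids = {"a": np.linspace(-3, 3, 13), "b": np.linspace(-3, 3, 13)}
print(grid_search(obj, grids))
```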
-
I don't see an open issue for this, and I just found a statistics article on it for QIF-GEE.
To get started: using principal components
Cho and Qu use principal components on a subset of moment co…
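A rough sketch of that first step (shapes are hypothetical; `g` stands for the stacked estimated moment conditions): project the centered moment conditions onto their leading principal components before forming the quadratic inference function.

```python
# Sketch: reduce a stack of estimated moment conditions with PCA
# before forming the quadratic inference function. Shapes are
# hypothetical; `g` is (n_subjects, n_moment_conditions).
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(100, 12))           # stacked moment conditions

g_centered = g - g.mean(axis=0)
# Principal components of the moment conditions via SVD.
_, s, vt = np.linalg.svd(g_centered, full_matrices=False)
k = 4                                    # number of components to keep
g_reduced = g_centered @ vt[:k].T        # (n_subjects, k)
print(g_reduced.shape, s[:k])
```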
-
OK, so I now have a slightly better understanding of the resampling buffer, and I thought it would be useful to turn the discussion about the particle filter where the model and inference are on di…
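To make the device question concrete, here is a minimal systematic-resampling sketch (my own illustration, not the library's buffer): the weights, cumulative sums, and index search all have to live on one device, so putting the model and inference on different devices forces a copy at this step.

```python
# Minimal systematic resampling sketch (illustrative, not the library's
# resampling buffer). Note the explicit device handling: the offsets and
# the cumulative weights must live on the same device as the weights.
import torch

def systematic_resample(weights: torch.Tensor) -> torch.Tensor:
    """Return ancestor indices drawn by systematic resampling."""
    n = weights.shape[0]
    offsets = (torch.rand(1, device=weights.device)
               + torch.arange(n, device=weights.device)) / n
    cumulative = torch.cumsum(weights / weights.sum(), dim=0)
    # clamp guards against floating-point round-off in the last cumsum entry
    return torch.searchsorted(cumulative, offsets).clamp(max=n - 1)

weights = torch.tensor([0.1, 0.4, 0.3, 0.2])   # same device as the particles
particles = torch.randn(4, 2, device=weights.device)
resampled = particles[systematic_resample(weights)]
```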
-
Post questions here for this week's fundamental readings: Grimmer, Justin, Molly Roberts, Brandon Stewart. 2022. Text as Data. Princeton University Press: Chapters 23, 24, 25, 26, 27 —“Prediction”, “C…
-
Hi, great work! The nd rasterizer is around 10x slower than the sh rasterizer. To be precise, my model's inference time with sh rasterization is 0.008 s, which gives me >100 FPS as described in the original gaussi…
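For reference, this is how I measure the per-call time (a generic CUDA timing sketch; `rasterize` is a placeholder for either rasterizer call), using events plus synchronization so the 10x gap isn't an artifact of asynchronous kernel launches:

```python
# Generic CUDA timing sketch; `rasterize` stands in for the sh or nd
# rasterizer call. Events + synchronize avoid measuring only the
# asynchronous launch instead of the kernel itself.
import torch

def time_rasterizer(rasterize, *args, n_warmup=10, n_iters=100):
    for _ in range(n_warmup):
        rasterize(*args)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(n_iters):
        rasterize(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / n_iters  # milliseconds per call
```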
-
Here I would like to train a GP model on a very high-dimensional X. I first decompose X into 27 subspaces (each of dimension `subspace_dim`) and then use the sum of 27 `MaternKernel`s as the `covar_module`; however, the speed i…
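For context, here is a sketch of the construction I describe, assuming GPyTorch (the dimension numbers are illustrative):

```python
# Sketch of the additive construction described above (assuming GPyTorch;
# the 27-way split of the input columns is illustrative).
import torch
import gpytorch

d = 270                                   # total input dim (illustrative)
n_sub, subspace_dim = 27, 270 // 27       # 27 blocks of 10 columns each
splits = [torch.arange(i * subspace_dim, (i + 1) * subspace_dim)
          for i in range(n_sub)]

# Each MaternKernel sees only its own block of columns via active_dims.
# Summing kernels this way triggers 27 separate kernel evaluations per
# covariance call, which is a likely source of the slowdown.
covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.AdditiveKernel(*[
        gpytorch.kernels.MaternKernel(nu=2.5,
                                      ard_num_dims=subspace_dim,
                                      active_dims=dims)
        for dims in splits
    ])
)
```

If the decomposition can be made one-dimensional per component, `gpytorch.kernels.AdditiveStructureKernel` evaluates all dimensions in one batched kernel call and tends to be much faster than a Python-level sum of kernels.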