-
> Pertinent to the above and @seabbs's comment on using `upjitter`: using the softplus transform instead of an upjitter (or a clamp to do ReLU) stabilises the sampling. Also, using `LogExpFunctions.xexpy` to …
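To illustrate why the softplus transform behaves better than a hard clamp for enforcing positivity (a minimal NumPy sketch, not the code from the thread; `upjitter` here is a hypothetical stand-in for the clamp being discussed):

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    # Smooth and strictly positive everywhere, with nonzero gradient,
    # and ~x for large x (np.logaddexp avoids overflow).
    return np.logaddexp(0.0, x)

def upjitter(x, eps=1e-6):
    # Hypothetical clamp-style "upjitter": hard-thresholds at eps.
    # Flat (zero gradient) wherever x < eps, which can stall samplers.
    return np.maximum(x, eps)

x = np.array([-30.0, -1.0, 0.0, 1.0, 30.0])
print(softplus(x))   # strictly positive, smooth through zero
print(upjitter(x))   # constant eps for all negative inputs
```

The practical difference: the clamp discards gradient information below the threshold, while softplus keeps it, which is the stabilisation effect described above.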
-
### Feature:
A new `MMDTraceELBO` class that will implement a Maximum Mean Discrepancy between samples from the guide and from the model, instead of the KL divergence used in the `TraceELBO` class.
### Motivation:
…
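For intuition, a minimal NumPy sketch of the (biased) squared-MMD estimator such a class could minimise between guide and model samples; the RBF kernel and fixed bandwidth are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def rbf_gram(a, b, bandwidth=1.0):
    # Gram matrix of the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 h^2)).
    sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimator of squared MMD between samples x ~ p and y ~ q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = rbf_gram(x, x, bandwidth).mean()
    kyy = rbf_gram(y, y, bandwidth).mean()
    kxy = rbf_gram(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy
```

Identical sample sets give an estimate of zero, and well-separated sets approach 2 under the RBF kernel, which is the sample-based discrepancy this feature would substitute for the KL term.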
-
-
@henryrscharf @cboettig
Likelihood / posterior distributions from a single trajectory vs from the ensemble
-
This scenario does not generate an error:
```python
posterior = JointDistribution([y, x]) # x and y are distributions that do not depend on z
print(posterior.get_parameter_names())
d = poste…
-
I got so excited about the `Soss.jl` presentation at JuliaCon 2021 that I decided to give it a go by implementing a mixture-of-Gaussians model. I borrowed [the example from Turing.jl](https://turin…
-
**Describe the bug**
I am implementing SNRE and SNLE (from [implemented algorithms](https://sbi-dev.github.io/sbi/latest/tutorials/16_implemented_methods/)) on a simple exponential simulator model wi…
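The model description is cut off above, but a generic exponential simulator of the kind `sbi` consumes might look like the following sketch (the function name, rate parameterisation, and observation count are all assumptions, not the reporter's actual code):

```python
import numpy as np

def exponential_simulator(theta, n_obs=10, seed=None):
    # Hypothetical stand-in for the issue's simulator: draws n_obs
    # i.i.d. Exponential observations with rate theta (mean 1/theta).
    rng = np.random.default_rng(seed)
    return rng.exponential(scale=1.0 / theta, size=n_obs)
```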
-
Neither the `posterior_predictive` nor the `posterior_predictive_branch` samplers check for the existence of functional "r" random generation functions. This causes a problem in the case of user-defi…
-
Hi Ax Team,
I am trying to implement a Service API version of the safe optimization idea floated by @Balandat [here](https://github.com/pytorch/botorch/discussions/2240#discussioncomment-8701003);…
-
Currently, Distributions.jl already provides a large collection of distributions, and for some of them MLE is implemented, as follows:
```julia
d = fit_mle(D,X);
```
However, the problem is mo…
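For intuition about what such a `fit_mle` call computes (sketched in Python rather than Julia, since the Julia specifics above are cut off): for a Normal distribution the MLE has a closed form, the sample mean and the biased (1/n) standard deviation.

```python
import numpy as np

def fit_mle_normal(x):
    # Closed-form Normal MLE: the sample mean and the biased
    # (1/n) standard deviation jointly maximise the likelihood.
    mu = x.mean()
    sigma = np.sqrt(((x - mu) ** 2).mean())
    return mu, sigma
```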