Closed: mschauer closed this issue 2 years ago
Does this do what you want? https://github.com/cscherrer/MeasureBase.jl/blob/master/src/combinators/powerweighted.jl
So I think you'd write
`y[i] ~ Soss.Bernoulli(logistic(v)) ↑ (N/K)`
or (maybe compare performance on these)
`y[i] ~ Soss.Bernoulli(logitp = v) ↑ (N/K)`
BTW I don't think you need the `Soss.` and `Tilde.` prefixes, since `Normal` and `Bernoulli` are from MeasureTheory.
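To make the suggestion concrete, here is a hedged, untested sketch of what such a model might look like. The model name `hat_model`, the inputs `x`, `N`, `K`, the linear predictor, and the `@model` argument syntax are all illustrative assumptions, not from the thread; only the `Bernoulli(logitp = v) ↑ (N/K)` line mirrors the suggestion above.

```julia
using Tilde, MeasureTheory  # ↑ (powerweighted) comes from MeasureBase

# Hypothetical sketch: logistic regression on a size-K subsample of
# N observations, where each data term is up-weighted by N/K via the
# powerweighted operator ↑, as suggested above.
hat_model = @model (x, N, K) begin
    α ~ Normal(0.0, 1.0)        # global intercept (illustrative)
    β ~ Normal(0.0, 1.0)        # slope (illustrative)
    for i in eachindex(x)
        v = α + β * x[i]        # linear predictor for observation i
        y[i] ~ Bernoulli(logitp = v) ↑ (N / K)
    end
end
```

Whether a `for` loop with indexed `y[i] ~ ...` statements composes with `↑` exactly this way inside `@model` is an assumption; the exact syntax may differ between Soss and Tilde.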
Awesome!
Below I create two logistic regression Tilde models, `full_model` and `hat_model`. Note that `hat_model` is stochastic and its gradient is an unbiased estimate of the full model's gradient scaled by `K/N`, because each observation is picked with probability `K/N`. You see that I use a trick for the global parameters, which are picked always (with probability 1): `α ~ Tilde.Normal(0, sqrt(N/K))` gives a log-likelihood which, when scaled together with the data, goes back to `α ~ Tilde.Normal(0, 1)`.
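The prior-rescaling trick can be checked directly (my sketch, not from the thread): scaling the log-density of a normal prior with variance `N/K` by the factor `N/K` recovers a standard normal log-density up to a constant, which is all that matters for gradients.

```latex
\frac{N}{K}\,\log \mathcal{N}\!\left(\alpha;\, 0,\, \tfrac{N}{K}\right)
  = \frac{N}{K}\left(-\frac{\alpha^2}{2\,(N/K)}
      - \frac{1}{2}\log\!\left(2\pi\,\tfrac{N}{K}\right)\right)
  = -\frac{\alpha^2}{2} + \text{const}
  = \log \mathcal{N}(\alpha;\, 0,\, 1) + \text{const}.
```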
It would be nice to tell `Tilde` to put a weight/factor on the log-likelihood, like this: