Closed. st-- closed this 2 years ago.
Merging #70 (9525788) into master (932f516) will increase coverage by 0.89%. The diff coverage is 95.00%.
```diff
@@            Coverage Diff             @@
##           master      #70      +/-   ##
==========================================
+ Coverage   94.87%   95.76%   +0.89%
==========================================
  Files           9       10       +1
  Lines          78      118      +40
==========================================
+ Hits           74      113      +39
- Misses          4        5       +1
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/expectations.jl | 92.00% <92.00%> (ø) | |
| src/likelihoods/exponential.jl | 100.00% <100.00%> (ø) | |
| src/likelihoods/gamma.jl | 100.00% <100.00%> (ø) | |
| src/likelihoods/gaussian.jl | 100.00% <100.00%> (+9.09%) | :arrow_up: |
| src/likelihoods/poisson.jl | 100.00% <100.00%> (ø) | |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 932f516...9525788.
Open question from #42: `expected_loglik` takes a Vector `y`, a Vector `q_f`, and a `lik` function that maps from a scalar `f` to a UnivariateDistribution; `expected_loglik` then does the broadcasting internally. Is that what we want?

- Would it be better (cleaner) to have `expected_loglik` handle scalars only (but this might result in some performance loss, e.g. from recomputing the Gauss-Hermite points and weights for each data point)?
- Should we expect `lik` to take the full Vector `fs` and return e.g. a Product() of UnivariateDistributions (though this might make the expectation code more complicated)?
- How would we want to handle the heteroskedastic case, where we do want to include the correlations across the two outputs at each data point, but independence between different data points (not sure how we would handle that on the AbstractGPs side, given that multi-output is all "output as input")?
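To make the performance tradeoff in the first bullet concrete, here is a minimal, self-contained sketch (plain Julia, not the package's actual code) of the current vector-in design, with hard-coded 3-point Gauss-Hermite quadrature. The names `expected_loglik`, `lik_logpdf`, `q_means`, and `q_vars` are illustrative assumptions about the interface, not the real signatures:

```julia
# Illustrative sketch only (not ApproximateGPs code): the "vector-in" design,
# with Gauss-Hermite points computed once and reused for every data point --
# the reuse that a scalar-only API might lose.

# 3-point Gauss-Hermite nodes and weights for integrals of exp(-x^2) * g(x).
const GH_X = (-sqrt(3 / 2), 0.0, sqrt(3 / 2))
const GH_W = (sqrt(π) / 6, 2 * sqrt(π) / 3, sqrt(π) / 6)

# log N(y; μ, v) with variance v, written out so the sketch has no dependencies.
normal_logpdf(y, μ, v) = -0.5 * (log(2π * v) + (y - μ)^2 / v)

# `lik_logpdf(f, y)` is the scalar log-likelihood; `q_means`/`q_vars` hold the
# marginal means and variances of q(f). Broadcasting happens internally.
function expected_loglik(lik_logpdf, y, q_means, q_vars)
    total = 0.0
    for (yi, m, v) in zip(y, q_means, q_vars)
        # E_{f ~ N(m, v)}[lik_logpdf(f, yi)] via the substitution f = m + sqrt(2v) * x
        total += sum(w * lik_logpdf(m + sqrt(2v) * x, yi) for (w, x) in zip(GH_W, GH_X)) / sqrt(π)
    end
    return total
end
```

One way to sanity-check the sketch: for a Gaussian likelihood with noise variance `s`, the expectation has the closed form `normal_logpdf(y, m, s) - v / (2s)`, and since the log-density is quadratic in `f`, the 3-point rule reproduces it to machine precision.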
Do we need to resolve that now, or should we merge it in this form and consider this later?
Later. I'm not keen to hold this PR up. These points all require more design discussion and are part of larger problems. In my view they're best dealt with in the next breaking release.
See #42 - ~should discuss the design while we're at it!~ any further design discussions moved to #73
Main changes to what's currently in ApproximateGPs:

- renamed to `expected_loglikelihood` to be more explicit
- argument order: `quadrature`, `lik`, `q_f`, `y`
- expectation method types: `DefaultExpectationMethod`, `AnalyticExpectation`, `GaussHermiteExpectation`, `MonteCarloExpectation` (the default is `DefaultExpectationMethod`)

Note: bumps julia compat to 1.6.
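Assuming the method types listed above select the quadrature scheme via multiple dispatch, the structure could be sketched as follows. The internals here are toy stand-ins (a hand-written Gaussian log-density, a Monte Carlo fallback), not the package's implementation:

```julia
# Hypothetical sketch of the dispatch structure implied by the method types
# above; only the type names come from the PR, the bodies are illustrative.
abstract type ExpectationMethod end
struct DefaultExpectationMethod <: ExpectationMethod end
struct AnalyticExpectation <: ExpectationMethod end
struct GaussHermiteExpectation <: ExpectationMethod
    n_points::Int
end
struct MonteCarloExpectation <: ExpectationMethod
    n_samples::Int
end

# log N(y; μ, v) with variance v, so the sketch runs with base Julia only.
normal_logpdf(y, μ, v) = -0.5 * (log(2π * v) + (y - μ)^2 / v)

# For a Gaussian likelihood with noise variance s, E[log N(y; f, s)] under
# f ~ N(m, v) has the closed form log N(y; m, s) - v / (2s).
expected_loglikelihood(::AnalyticExpectation, s, m, v, y) =
    normal_logpdf(y, m, s) - v / (2s)

# Monte Carlo fallback: sample f ~ N(m, v) and average the log-likelihood.
function expected_loglikelihood(mc::MonteCarloExpectation, s, m, v, y)
    fs = m .+ sqrt(v) .* randn(mc.n_samples)
    return sum(normal_logpdf.(y, fs, s)) / mc.n_samples
end

# The default method forwards to a concrete choice per likelihood.
expected_loglikelihood(::DefaultExpectationMethod, s, m, v, y) =
    expected_loglikelihood(AnalyticExpectation(), s, m, v, y)
```

The appeal of this shape is that adding a new quadrature scheme only requires a new subtype plus one method, without touching the likelihood code.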