JuliaTrustworthyAI / LaplaceRedux.jl

Effortless Bayesian Deep Learning through Laplace Approximation for Flux.jl neural networks.
https://www.taija.org/LaplaceRedux.jl/
MIT License

Added the variances together so that the results of `predict` correspond to the posterior predictive distribution #117

Closed · pasq-cat closed this 1 month ago

pasq-cat commented 1 month ago

Right now it corresponds to a maximum likelihood estimate centered around the MAP: the contribution to the variance due to the priors is not added to the variance coming from the likelihood.
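For regression this means the predicted variance should be the sum of the GLM predictive variance and the observational noise variance. A minimal sketch of the idea (using Distributions.jl; the variable names are illustrative placeholders, not the LaplaceRedux.jl API):

```julia
using Distributions

# Toy values standing in for quantities the Laplace approximation supplies;
# the names are illustrative, not the package's API.
f_map   = 1.2    # MAP network output f_θ̂(x) for one test input
glm_var = 0.05   # GLM predictive variance Jᵀ Σ J (epistemic, from the weight posterior)
σ²      = 0.30   # estimated observational noise variance (aleatoric)

# Distribution over the network output f(x): no observational noise.
p_f = Normal(f_map, sqrt(glm_var))

# Posterior predictive over the observation y: the two variances add.
p_y = Normal(f_map, sqrt(glm_var + σ²))
```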

codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 96.47%. Comparing base (b638fcb) to head (94040ab).

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main     #117   +/-   ##
=======================================
  Coverage   96.47%   96.47%
=======================================
  Files          21       21
  Lines         595      596       +1
=======================================
+ Hits          574      575       +1
  Misses         21       21
```


pasq-cat commented 1 month ago

> I think this is confusing the distribution over network outputs p(f | x, D) (i.e. the GLM predictive) with the approximate predictive distribution p(y | x, D) (see section 2.1, eq. (4) in Daxberger et al.). The former does not incorporate observational noise; the latter does.
>
> The information from the weight prior does enter the GLM predictive, but the observational noise does not (and it shouldn't).

Mmm ok, I will leave this issue to you.
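For reference, in the notation of Daxberger et al. (section 2.1), the two distributions distinguished above are, for a Gaussian likelihood, roughly the following (a sketch, with $J_x$ the Jacobian of the network at the MAP, $\Sigma$ the posterior covariance over the weights, and $\sigma^2$ the observational noise variance):

$$
p(f \mid x, \mathcal{D}) \approx \mathcal{N}\big(f_{\theta_{\mathrm{MAP}}}(x),\; J_x^\top \Sigma J_x\big),
\qquad
p(y \mid x, \mathcal{D}) \approx \mathcal{N}\big(f_{\theta_{\mathrm{MAP}}}(x),\; J_x^\top \Sigma J_x + \sigma^2\big).
$$

Only the second adds the observational noise, which is the term this PR sums into the predicted variance.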