Closed by matthewcarbone 1 year ago
Merging #50 (f642573) into main (a8e46a0) will increase coverage by 0.02%. The diff coverage is n/a.
```
@@            Coverage Diff             @@
##             main      #50      +/-   ##
==========================================
+ Coverage   95.98%   96.01%   +0.02%
==========================================
  Files          44       44
  Lines        3541     3541
==========================================
+ Hits         3399     3400       +1
+ Misses        142      141       -1
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | `96.01% <ø> (+0.02%)` | :arrow_up: |
Flags with carried forward coverage won't be shown.
see 1 file with indirect coverage changes
Thanks! I feel it could benefit from a discussion of when one should use a fully Bayesian (HMC/NUTS) approach versus a variational inference approximation. In my experience:
**HMC/NUTS:**
- *Sensitivity to priors:* This can be perceived as a strength or a weakness, depending on the context; many researchers appreciate it because it offers a more intuitive grasp of the model.
- *Reliable uncertainty estimates:* Offers robust evaluations of uncertainties, as it samples directly from the posterior. Variational methods are known to underestimate uncertainties.
- *Integration with classical Bayesian models:* This is particularly evident when combining Gaussian processes with traditional Bayesian models, as demonstrated in structured GP and hypothesis learning.
- *Comprehensive convergence diagnostics:* Indicators such as `n_eff`, `r_hat`, and `acc_prob` for each inferred parameter.
- *Speed limitations:* One of its primary drawbacks is computational speed.
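To make the diagnostics point concrete, here is a standalone numpy sketch (not gpax or numpyro code) of a simplified Gelman-Rubin `r_hat`: well-mixed chains give a value near 1, while chains stuck in different regions inflate the between-chain variance and push `r_hat` well above 1.

```python
import numpy as np

def gelman_rubin(chains):
    """Simplified Gelman-Rubin r_hat for one parameter.

    chains: array of shape (n_chains, n_samples) holding MCMC draws.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(4, 1000))   # four well-mixed chains
bad = good + 2.0 * np.arange(4)[:, None]      # chains stuck at different offsets

print(gelman_rubin(good))  # close to 1: chains agree
print(gelman_rubin(bad))   # well above 1: chains disagree
```

In practice NUTS samplers report this (along with `n_eff` and acceptance probability) per parameter, which is what makes convergence failures easy to spot.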
**SVI:**
- *Efficiency:* Significantly faster and memory-efficient (performs equally well with 32-bit precision).
- *Acceptable trade-offs:* For many real-world tasks, the slight decrease in the accuracy of predictive uncertainty estimates is negligible or acceptable.
- *Convergence indicator limitations:* The loss may not be a very good indicator of convergence; it can easily overshoot or undershoot.
Would you like me to add this to the notebook, or perhaps it belongs in a README/docs somewhere? I am definitely not an authority on these matters, but I can try to explain.
I would add it to the notebook for now. I plan to add some general advice to the README later on or maybe make a separate tutorial.
OK, will do. I'll add your text to the notebook a little later and push the changes. I can also use this PR to run the smoke tests on all of the notebooks. Stay tuned.
Sounds good - looking forward to it
I've created a simple example for people to compare the results of an `ExactGP` and a `viGP`. Nothing too fancy here.