JuliaTrustworthyAI / LaplaceRedux.jl

Effortless Bayesian Deep Learning through Laplace Approximation for Flux.jl neural networks.
https://juliatrustworthyai.github.io/LaplaceRedux.jl/
MIT License

predict call for regression #100

Open pat-alt opened 1 month ago

pat-alt commented 1 month ago

Double-check whether it's reasonable to return the GLM predictive when calling predict on regression objects (in line with the torch convention, AFAIK), or whether we should instead incorporate the observational noise (which is typically what is shown on plots).

I'll have a look at this next week, but cc @Rockdeldiablo: feel free to continue the discussion from Teams below.

pasq-cat commented 1 month ago

Hmm. If I had a black box and wanted an idea of where it is uncertain, I would not omit the contribution of either epistemic or aleatoric uncertainty, because they signal two different things: the first shows where the NN lacks data (which a researcher may want to know), while the second reflects the intrinsic stochasticity of the measurements.

For example, if you had a gap in the data right in the middle, then with only the aleatoric uncertainty the NN would give overconfident predictions to the user, and in that case the "trustworthiness" is lost. If instead I use only the epistemic uncertainty, I will have overconfident measures in regions with a lot of data points.

The plot is good for humans, but if the neural network has to be integrated into an IoT device or a pipeline, the results have to be reported numerically by the predict function.
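For what it's worth, the two options discussed above differ only in whether the observational noise is folded into the reported variance. A minimal sketch of that combination (plain Python, not the LaplaceRedux.jl API; `var_f` and `sigma_noise` are illustrative names for the GLM predictive variance and the observation-noise standard deviation):

```python
import math

def predictive_interval(mu, var_f, sigma_noise, z=1.96):
    """Combine epistemic variance (var_f, e.g. from a GLM/Laplace
    predictive) with aleatoric variance (sigma_noise**2, observational
    noise) into one predictive band around the mean mu.

    For independent sources the variances simply add; returning only
    var_f corresponds to the 'torch convention' option, while adding
    sigma_noise**2 corresponds to the band typically shown on plots.
    """
    var_total = var_f + sigma_noise ** 2
    half_width = z * math.sqrt(var_total)
    return mu - half_width, mu + half_width

# In a data gap, var_f is large, so the band stays wide even if the
# noise level is small; in data-dense regions var_f shrinks and the
# band is dominated by the aleatoric term sigma_noise**2.
lo, hi = predictive_interval(mu=0.0, var_f=0.0, sigma_noise=1.0)
```

With `var_f=0` and `sigma_noise=1`, the 95% band above is just `±1.96`, i.e. pure observation noise; dropping the `sigma_noise ** 2` term instead reproduces the overconfident-in-gaps behaviour described in the comment.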