Justinezgh / SBI-Diff-Simulator


Gradients stochasticity study #38

Open Justinezgh opened 1 year ago

Justinezgh commented 1 year ago

Our goal is to show that the noisier the gradients, the less they help constrain the posterior distribution. To study this, I created a toy simulator. The construction of the simulator is detailed in this notebook; briefly, it is built so that the gradients become increasingly noisy as the number of latent variables z grows. The SBI algorithm used here is NPE. The metric used to evaluate the posterior quality (compared to the ground truth obtained with MCMC) is the C2ST metric.
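For readers who don't want to open the notebook, here is a minimal sketch of one way to get this behaviour (an illustration only, not the exact simulator from the notebook): each latent variable enters multiplicatively, so the pathwise derivative dx/dθ has a variance that grows with the number of latents.

```python
# Minimal sketch (illustration only, not the notebook's simulator): a toy simulator
# whose pathwise derivative dx/dtheta becomes noisier as the number of latent
# variables grows, because every latent z_i enters multiplicatively.
import jax
import jax.numpy as jnp


def simulator(key, theta, n_latent, eps=0.1):
    z = jax.random.normal(key, (n_latent,))       # latent variables z_i ~ N(0, 1)
    return theta * jnp.prod(1.0 + eps * z)        # x = theta * prod_i (1 + eps * z_i)


# Pathwise derivative of the simulator output w.r.t. theta, at fixed latents.
dx_dtheta = jax.grad(simulator, argnums=1)

key = jax.random.PRNGKey(0)
for n_latent in (20, 500):
    keys = jax.random.split(key, 1_000)
    grads = jax.vmap(lambda k: dx_dtheta(k, 1.0, n_latent))(keys)
    # The spread of the per-sample gradient grows with n_latent.
    print(n_latent, "latents -> gradient std:", float(grads.std()))
```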

For this experiment I considered two numbers of latent variables: 20 and 500. The first part of the experiment was to find the optimal score weight for each case:

[two figures: optimal score weight search for the 20 and 500 latent variable cases]

I concluded that the optimal score weight is 1e-3 for the 500-latent-variable case and 1e-2 for the 20-latent-variable case. With these weights fixed, we can compare the two cases:

[figure: comparison of the two cases at their optimal score weights]

From this plot we can see that in the 500-latent-variable case (where the gradients are noisier than in the 20-latent-variable case) the quality-evaluation curve (C2ST, where higher means a worse posterior approximation) lies above the 20-latent-variable one. I think this shows that the noisier the gradients, the less they help constrain the distribution.
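For reference, this is roughly what I mean by "score weight" (a minimal sketch under my reading of the setup; `log_prob_fn` and the argument names are placeholders, not the repo's actual API): the NPE loss is the usual negative log-likelihood of the NF plus a score-matching term that pulls the flow's posterior gradient towards the simulator's (noisy) joint score, weighted by the score weight.

```python
# Minimal sketch of a score-weighted NPE loss (placeholder names, not the repo's API):
# negative log-likelihood of the flow + score_weight * score-matching term, where the
# target score comes from the differentiable simulator and is therefore noisy.
import jax
import jax.numpy as jnp


def npe_loss(params, log_prob_fn, thetas, xs, simulator_scores, score_weight):
    """log_prob_fn(params, theta, x) -> log q_phi(theta | x) for a single sample.

    simulator_scores holds d log p(theta, x) / d theta from the simulator
    (one row per training sample); score_weight is the lambda tuned above.
    """
    # Standard NPE term: fit q_phi(theta | x) to the (theta, x) training pairs.
    log_probs = jax.vmap(log_prob_fn, in_axes=(None, 0, 0))(params, thetas, xs)
    nll = -jnp.mean(log_probs)

    # Gradient term: the flow's score w.r.t. theta should match the simulator's score.
    flow_scores = jax.vmap(jax.grad(log_prob_fn, argnums=1),
                           in_axes=(None, 0, 0))(params, thetas, xs)
    score_loss = jnp.mean(jnp.sum((flow_scores - simulator_scores) ** 2, axis=-1))

    # The noisier simulator_scores are, the less informative this second term is.
    return nll + score_weight * score_loss
```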

(In the plots above, the epistemic uncertainty is obtained by averaging over 10 NFs.)
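For completeness, here is roughly how I think of the C2ST metric and of these uncertainty bands (a sketch, not necessarily the exact implementation used here): a classifier is trained to separate NPE samples from the MCMC reference samples, and its cross-validated accuracy is the score (0.5 = indistinguishable, 1.0 = completely separable); the bands come from repeating this for each of the 10 NFs.

```python
# Sketch of the C2ST metric (not necessarily the exact implementation used here):
# cross-validated accuracy of a classifier trained to distinguish NPE posterior
# samples from MCMC reference samples; 0.5 means the posterior matches the truth.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier


def c2st(npe_samples, mcmc_samples, n_folds=5, seed=0):
    X = np.concatenate([npe_samples, mcmc_samples])
    y = np.concatenate([np.zeros(len(npe_samples)), np.ones(len(mcmc_samples))])
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=seed)
    return cross_val_score(clf, X, y, cv=n_folds, scoring="accuracy").mean()


# Epistemic uncertainty band: compute c2st(...) once per trained NF and report the
# mean and standard deviation over the 10 flows.
```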

Justinezgh commented 1 year ago

Hi @glouppe, I ran the experiment again, this time with a different training set for each NF, and I got these results:

[two figures: results with a different training set for each NF]

So it seems the effect is not related to the training set. What do you think?