pymc-devs / pymc-examples

Examples of PyMC models, including a library of Jupyter notebooks.
https://www.pymc.io/projects/examples/en/latest/
MIT License

re-execute blackbox numpy notebook #496

Closed · OriolAbril closed this 1 year ago

OriolAbril commented 1 year ago

re-execute blackbox numpy notebook, closes #268

review-notebook-app[bot] commented 1 year ago

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter notebooks.

Powered by ReviewNB

OriolAbril commented 1 year ago

cc @ricardoV94, not sure what is going on with the gradient comparison at the bottom, I assume I am computing a different quantity or there are normalization terms at play but I can't really tell from the existing docs.

ricardoV94 commented 1 year ago

@OriolAbril the difference is that the gradient was being requested with respect to the untransformed parameters, which is no longer possible in PyMC >= 4.0 (see the related discussion here: https://github.com/pymc-devs/pymc/issues/5443).
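
As a quick illustration (a sketch, assuming PyMC >= 4 imported as pm): the model's free value variables are the transformed ones, so dlogp is taken with respect to them by default:

import pymc as pm

with pm.Model() as m_demo:
    u = pm.Uniform("u", lower=-10.0, upper=10.0)

print(m_demo.initial_point())  # {'u_interval__': array(0.)} -- the transformed variable
print(m_demo.value_vars)       # [u_interval__], i.e. what dlogp differentiates against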

To obtain the values as they were computed before, you can run this snippet:

# assumes x, sigma, data, mtrue, ctrue are defined earlier in the notebook
import pymc as pm

# test the gradient that PyMC uses for the Normal log-likelihood;
# transform=None keeps m and c on their original (untransformed) scale
with pm.Model() as test_model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0, transform=None)
    c = pm.Uniform("c", lower=-10.0, upper=10.0, transform=None)
    pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)

gradfunc = test_model.compile_dlogp([m, c])
grad_vals_pymc = gradfunc({"m": mtrue, "c": ctrue})
print(f'Gradient returned by PyMC "Normal" distribution: {grad_vals_pymc}')
Gradient returned by PyMC "Normal" distribution: [8.32628482 2.02949635]
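
For reference, these numbers can be cross-checked against the analytic gradient of the summed Normal log-likelihood (a sketch, assuming the same x, sigma, data, mtrue, ctrue from the notebook):

import numpy as np

# d/dm sum(logp) = sum((data - mu) * x) / sigma**2 and d/dc sum(logp) = sum(data - mu) / sigma**2,
# with mu = m * x + c evaluated at (mtrue, ctrue)
resid = data - (mtrue * x + ctrue)
print(np.array([np.sum(resid * x), np.sum(resid)]) / sigma**2)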

However, I think for teaching purposes it would be better to directly compare the model that uses the custom Op with the reference model, like this:

# pymodel is the model defined earlier in the notebook with the custom LogLikeWithGrad Op
ip = pymodel.initial_point()
print(f"Evaluating dlogp of model at point {ip}")
grad_vals_custom = pymodel.compile_dlogp()(ip)

# compare with the gradient that PyMC uses for the Normal log-likelihood
with pm.Model() as test_model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)
    pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)
grad_vals_pymc = test_model.compile_dlogp()(ip)

print(f'Gradient of model using a custom "LogLikeWithGrad": {grad_vals_custom}')
print(f'Gradient of model using a PyMC "Normal" distribution: {grad_vals_pymc}')
Evaluating dlogp of model at point {'m_interval__': array(0.), 'c_interval__': array(0.)}
Gradient of model using a custom "LogLikeWithGrad": [1286.63142412  250.14748176]
Gradient of model using a PyMC "Normal" distribution: [1286.63142412  250.14748176]
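
If it helps to connect the two comparisons: the two gradients are related by the chain rule through the interval transform. A sketch (reusing gradfunc from the transform=None model above):

# for the interval transform y = log((v - a) / (b - v)), dv/dy = (v - a) * (b - v) / (b - a),
# which at the midpoint v = 0 of Uniform(-10, 10) equals 10 * 10 / 20 = 5; the derivative of the
# log-Jacobian term vanishes there, so the transformed gradient is 5x the untransformed one
grad_untransformed = gradfunc({"m": 0.0, "c": 0.0})
print(grad_untransformed * 5.0)  # should match the values printed at the initial point above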