PyHGF is a Python library for creating and manipulating dynamic probabilistic networks for predictive coding. These networks approximate Bayesian inference by optimizing beliefs through the diffusion of predictions and precision-weighted prediction errors. The network structure remains flexible during message-passing steps, allowing for dynamic adjustments. They can be used as biologically plausible cognitive models in computational neuroscience or as a generalization of Bayesian filtering for designing efficient, modular decision-making agents. The default implementation supports the generalized Hierarchical Gaussian Filter (gHGF; Weber et al., 2024), but the framework is designed to be adaptable to other algorithms. Built on top of JAX, the core functions are differentiable and JIT-compiled where applicable. The library is optimized for modularity and ease of use, allowing seamless integration with other libraries in the ecosystem for Bayesian inference and optimization. Additionally, a binding to a Rust implementation is under active development and will further improve flexibility during inference. You can find the method paper describing the toolbox here, and the method paper describing the gHGF, the main framework currently supported by the toolbox, here.
The latest official release can be installed from PyPI:
pip install pyhgf
The current version under development can be installed from the master branch of the GitHub repository:
pip install "git+https://github.com/ilabcode/pyhgf.git"
Dynamic networks can be defined as a tuple containing the nodes' attributes (parameters), the edges connecting them, and the update functions that are applied in sequence during belief propagation.
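For illustration, the sketch below builds a minimal two-node network with the same add_nodes syntax used in the example further down and prints its components; it assumes the Network class exposes the parameter dictionary and the adjacency structure under the attributes and edges names.

from pyhgf.model import Network

# Build a minimal two-node network: a continuous state node (node 0)
# with a continuous value parent (node 1)
network = (
    Network()
    .add_nodes(kind="continuous-state")
    .add_nodes(kind="continuous-state", value_children=0)
)

# Inspect the components defining the network
print(network.attributes)  # dictionary of each node's parameters
print(network.edges)       # adjacency lists encoding the couplings between nodes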
You can find a deeper introduction to how to create and manipulate networks under the following link:
Generalized Hierarchical Gaussian Filters (gHGF) are specific instances of dynamic networks where each node encodes a Gaussian distribution that can inherit its value (mean) and volatility (variance) from other nodes. The presentation of a new observation at the lowest level of the hierarchy (i.e., the input node) triggers a recursive update of the nodes' beliefs (i.e., posterior distributions) through top-down predictions and bottom-up precision-weighted prediction errors. The resulting probabilistic network operates as a Bayesian filter, and a response function can parametrize actions/decisions given the current beliefs. By comparing those behaviours with actual outcomes, a surprise function can be optimized over a set of free parameters. The Hierarchical Gaussian Filter for binary and continuous inputs was first described in Mathys et al. (2011, 2014) and later implemented in the Matlab HGF Toolbox (part of TAPAS; Frässle et al., 2021).
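To make the update step concrete, here is a simplified, illustrative sketch of a precision-weighted update for a continuous value parent, assuming unit coupling strength and ignoring volatility coupling; the function name and signature are purely illustrative and do not correspond to the library's internal update functions.

# Illustrative only: precision-weighted update of a value parent
# from its child's prediction error (unit coupling strength)
def value_parent_update(mu_parent_hat, pi_parent_hat, mu_child, mu_child_hat, pi_child_hat):
    delta_child = mu_child - mu_child_hat      # value prediction error from the child
    pi_parent = pi_parent_hat + pi_child_hat   # posterior precision of the parent
    mu_parent = mu_parent_hat + (pi_child_hat / pi_parent) * delta_child  # precision-weighted mean update
    return mu_parent, pi_parent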
You can find a deeper introduction to how the gHGF works under the following link:
Here we demonstrate how to fit forward a two-level binary Hierarchical Gaussian Filter. The input time series is a set of binary observations from an associative learning task (Iglesias et al., 2013).
from pyhgf.model import Network
from pyhgf import load_data
# Load time series example data (observations, decisions)
u, y = load_data("binary")
# Create a two-level binary HGF from scratch
hgf = (
    Network()
    .add_nodes(kind="binary-state")
    .add_nodes(kind="continuous-state", value_children=0)
)
# add new observations
hgf.input_data(input_data=u)
# visualization of the belief trajectories
hgf.plot_trajectories();
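Beyond plotting, the belief trajectories can be exported for further analysis. The snippet below assumes the network exposes a to_pandas() export, as used in the library's tutorials, returning one row per observation.

# Export the belief trajectories as a pandas DataFrame
# (one row per observation, one column per node parameter)
trajectories_df = hgf.to_pandas()
print(trajectories_df.head())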
from pyhgf.response import binary_softmax_inverse_temperature
# compute the model's surprise (-log(p))
# using the binary softmax with inverse temperature as the response model
surprise = hgf.surprise(
    response_function=binary_softmax_inverse_temperature,
    response_function_inputs=y,
    response_function_parameters=4.0
)
print(f"Sum of surprises = {surprise.sum()}")
Sum of surprises = 138.8992462158203
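As noted above, this surprise can serve as an objective for recovering free parameters. A minimal sketch, here a simple grid search over the inverse temperature, with grid values chosen arbitrarily for illustration:

import numpy as np

# Evaluate the summed surprise over a grid of candidate inverse temperatures
# and keep the value that best explains the observed decisions
temperatures = np.linspace(0.5, 10.0, 20)
surprises = [
    hgf.surprise(
        response_function=binary_softmax_inverse_temperature,
        response_function_inputs=y,
        response_function_parameters=float(temperature),
    ).sum()
    for temperature in temperatures
]
best_temperature = temperatures[np.argmin(surprises)]
print(f"Inverse temperature minimizing surprise: {best_temperature}")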
This implementation of the Hierarchical Gaussian Filter was inspired by the original Matlab HGF Toolbox. A Julia implementation is also available here.