@OzanCatalVerses pointed out that `norm_dist` within the `pymdp.utils` module strangely treats the case of 3-tensors differently than other numbers of dimensions, see here and code below.
```python
def norm_dist(dist):
    """ Normalizes a Categorical probability distribution (or set of them) assuming sufficient statistics are stored in leading dimension"""
    if dist.ndim == 3:
        new_dist = np.zeros_like(dist)
        for c in range(dist.shape[2]):
            new_dist[:, :, c] = np.divide(dist[:, :, c], dist[:, :, c].sum(axis=0))
        return new_dist
    else:
        return np.divide(dist, dist.sum(axis=0))
```
Will correct this to:
```python
def norm_dist(dist):
    """ Normalizes a Categorical probability distribution (or set of them) assuming sufficient statistics are stored in leading dimension"""
    return np.divide(dist, dist.sum(axis=0))
```
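The special-casing is unnecessary because NumPy broadcasting already handles the 3-tensor case: for a `(d0, d1, d2)` array, `dist.sum(axis=0)` has shape `(d1, d2)`, which broadcasts against the full array exactly as the per-slice loop did. A quick sanity check (with made-up test shapes) that the simplified version matches the looped version:

```python
import numpy as np

def norm_dist(dist):
    """ Normalizes a Categorical probability distribution (or set of them) assuming sufficient statistics are stored in leading dimension"""
    return np.divide(dist, dist.sum(axis=0))

# Hypothetical test data: a batch of unnormalized 3x4 distributions, 5 deep
rng = np.random.default_rng(0)
A = rng.random((3, 4, 5))

# Looped normalization, as in the original 3-tensor branch
looped = np.zeros_like(A)
for c in range(A.shape[2]):
    looped[:, :, c] = np.divide(A[:, :, c], A[:, :, c].sum(axis=0))

assert np.allclose(norm_dist(A), looped)          # same result as the loop
assert np.allclose(norm_dist(A).sum(axis=0), 1.0) # leading dim sums to 1
```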