Closed SchroederAdrian closed 2 weeks ago
Weird... Please upload a MWE (minimal working example), so that I can try to see what is going on on my end. (The first box is not a MWE; I don't know how you defined LinStSp.) If you can make it as small as possible, that'd be great.
Ah yes, apologies. Here are the parameters and the model: essentially a multivariate linear Gaussian model with transform matrix lambda, VAR lag matrix S, and some parameter restrictions.
import numpy as np
import particles
from particles import distributions as dists
from particles import state_space_models as ssms
Q = 2     # dimension of the latent variable
N = 100   # dimension of the observed series
R = 1000  # number of particles
T = 50    # number of time periods
def generate_S(Q):
    if Q > 1:
        s = np.random.uniform(-0.9, 0.9, size=Q)
        s = np.array(s / np.sum(np.abs(s)), ndmin=2)
        S = s.T @ s
        return S
    else:
        return np.array(np.random.uniform(-0.8, 0.8), ndmin=2)
monte_S = generate_S(Q)
def construct_lambda(N, Q):
    # constructs a coefficient matrix satisfying the identification restrictions
    eigenvalues = np.random.uniform(low=2.0, high=10.0, size=Q)  # distinct, non-zero eigenvalues
    Lambda = np.diag(eigenvalues)
    U, _ = np.linalg.qr(np.random.normal(size=(N, Q)))
    A = U @ np.sqrt(Lambda)
    return A
monte_lambda = construct_lambda(N, Q)
monte_f0 = np.zeros((Q,1)) # initial value
class LinStSp(ssms.StateSpaceModel):
    default_params = {'lambda_i': np.random.normal(size=(N, Q)),
                      'S': np.eye(Q),
                      'ini_fac': np.zeros(Q)}

    def PX0(self):
        return dists.MvNormal(loc=self.ini_fac.T, cov=np.eye(Q))

    def PX(self, t, xp):
        mean = self.S @ xp.T  # will be Q x R
        return dists.MvNormal(loc=mean.T, cov=np.eye(Q))  # transposed back to R x Q

    def PY(self, t, xp, x):
        meanY = self.lambda_i @ x.T  # + self.beta  # will be N x R
        return dists.MvNormal(loc=meanY.T, cov=np.eye(N))
# simulate data
model_fil = LinStSp(lambda_i = monte_lambda, S= monte_S, ini_fac = monte_f0)
x_monte, y_monte = model_fil.simulate(T+200)
x_monte, y_monte = x_monte[200:], y_monte[200:]
from particles import variance_estimators as var
fk = ssms.Bootstrap(ssm=LinStSp(lambda_i=monte_lambda, S=monte_S, ini_fac=monte_f0), data=y_monte)
pf = particles.SMC(fk=fk, N=R, qmc=False, store_history=True, resampling='systematic', collect=[particles.collectors.Moments(), var.Var()])
pf.run()
# paths = pf.hist.X
paths = pf.hist.backward_sampling_mcmc(M=R)
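As an aside on the comment in construct_lambda: with orthonormal U, the Gram matrix A.T @ A equals the diagonal matrix of the (distinct) eigenvalues, which is the identification restriction being enforced. A minimal, self-contained check of that property (an illustration, not part of the original report):

```python
import numpy as np

# Same construction as construct_lambda above.
N, Q = 100, 2
eigenvalues = np.random.uniform(low=2.0, high=10.0, size=Q)
U, _ = np.linalg.qr(np.random.normal(size=(N, Q)))
A = U @ np.sqrt(np.diag(eigenvalues))

# Columns of A are orthogonal, with squared norms equal to the eigenvalues:
gram = A.T @ A
assert np.allclose(gram, np.diag(eigenvalues))
```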
The scary message from Numba was actually hiding a simple bug, which triggered an error only when function $\varphi$ is such that $\varphi(x)$ is a vector (instead of a scalar); in your case $\varphi(x)=x$ (the default function), and your $x$ is a vector of dimension 2. It's now fixed in the experimental branch. I'll run more tests when I'm back from holiday, before pushing the fix to the master branch. Please switch to experimental in the meantime. And thanks a lot for spotting this and reporting it here.
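To illustrate the scalar vs. vector distinction (a numpy-only sketch, not the package's internal code; phi_scalar and phi_vector are hypothetical names): a scalar-valued test function yields a single weighted average over the particles, while the default $\varphi(x)=x$ yields one average per component, so the two cases flow through differently shaped arrays.

```python
import numpy as np

rng = np.random.default_rng(42)
R, Q = 1000, 2
x = rng.normal(size=(R, Q))   # R particles, each of dimension Q
w = rng.random(R)
w /= w.sum()                  # normalised importance weights

phi_scalar = lambda x: x[:, 0]   # scalar-valued test function
phi_vector = lambda x: x         # default phi(x) = x, vector-valued

est_scalar = np.average(phi_scalar(x), weights=w)          # a single float
est_vector = np.average(phi_vector(x), axis=0, weights=w)  # array of shape (Q,)
```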
Ah, that's what it was; I already suspected the Numba version in my virtual env. Thanks so much for looking into this so quickly, I appreciate it. And enjoy the rest of your holiday!
Hello Professor Chopin,
Thanks for the excellent package and your hard work! Your write-up of the package and its ease of modification are incredible.
I ran into a bug when trying to use the variance estimator, namely when calling
It appears that Var() raises a typing error. I tried with and without storing (full or partial) history, and without the second collector, but the following error persists:
I'm on an M2 Mac with the new arm64 architecture, using Python 3.9 in VS Code; I'm not sure whether it's a software issue on my end?
Thank you! Adrian