juliagorman opened this issue 6 months ago
Thanks for the question, @juliagorman. The rSLDS inference algorithm has to solve a nonconvex, combinatorial optimization problem with many local optima, so it's not surprising that you would get different answers from one run to the next. One way to mitigate this issue is to use a heuristic to initialize the model. I believe the SSM default initialization is to first estimate the continuous latent states using PCA (or in the case of gaussian_id emissions, just initialize x to the observations), then fit an ARHMM to initialize the discrete states.
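For illustration, here is a minimal sketch of that kind of heuristic. It assumes the lindermanlab/ssm API for HMMs and scikit-learn's PCA; it is not the library's exact internal initialization code, and the variable names and data are placeholders.

```python
# Sketch of the initialization heuristic described above (not SSM's internals):
# estimate continuous latents with PCA, then fit an ARHMM to those latents.
import numpy as np
from sklearn.decomposition import PCA
import ssm

K, D_latent = 5, 2              # illustrative values
y = np.random.randn(500, 10)    # placeholder for a (T, D_obs) data array

# Step 1: estimate continuous latents with PCA.
# (For gaussian_id emissions, where D_obs == D_latent, one could simply set x_init = y.)
x_init = PCA(n_components=D_latent).fit_transform(y)

# Step 2: fit an ARHMM to the estimated latents to get initial discrete states.
arhmm = ssm.HMM(K, D_latent, observations="ar")
arhmm.fit(x_init, method="em", num_iters=50)
z_init = arhmm.most_likely_states(x_init)
```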
Your example may be especially problematic because it appears that the true latents follow a linear dynamical system (equivalently, an rSLDS with one discrete state), but you're fitting the rSLDS with K=5 states. I would recommend doing some model selection (e.g., based on held-out ELBOs), which would presumably show that K=1 is best.
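As a rough sketch of such a sweep (simplified: it only records training ELBOs, whereas the more rigorous comparison would evaluate the ELBO on held-out data with the model parameters frozen; the dimensions and data below are placeholders):

```python
# Fit an rSLDS for several values of K and record the final ELBO for each.
import numpy as np
import ssm

D_obs = D_latent = 2  # kept equal here since gaussian_id emissions use an identity mapping
y_train = np.random.randn(1000, D_obs)  # placeholder; replace with real data

final_elbos = {}
for K in range(1, 8):
    rslds = ssm.SLDS(D_obs, K, D_latent,
                     transitions="recurrent_only",
                     dynamics="gaussian",
                     emissions="gaussian_id",
                     single_subspace=True)
    elbos, q = rslds.fit(y_train,
                         method="laplace_em",
                         variational_posterior="structured_meanfield",
                         num_iters=100)
    final_elbos[K] = elbos[-1]

print(final_elbos)  # compare across K; ideally compare held-out ELBOs instead
```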
Hello,
Thank you for your response. I generated these circle dynamics with some noise so I could play with some of the parameters before testing the model on my own data. Cross-validation with ELBOs told me K=7 was best, but I will next try the explained-variance approach you described in another question. When I move to my own data, are there any parameters that will help with this problem? Parameter sweeps are currently challenging because I can't tell whether differences between runs come from the initialization or from the parameter I changed. Also, in case it is useful information: when using my own data I currently perform dimensionality reduction first and fit the model on the resulting latent trajectories. Thank you for the help!
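For the sweeps, one option (a sketch, assuming SSM draws its random initial parameters from numpy's global random state; the helper name is illustrative) is to pin the seed before each fit, so that differences across the sweep reflect the hyperparameter rather than a fresh random initialization:

```python
# Sketch: fix numpy's global seed before building/fitting each model so that
# repeated runs with the same hyperparameters start from the same initialization.
import numpy as np
import ssm

def fit_rslds(y, K, seed=0):
    np.random.seed(seed)  # pin the random initialization
    D = y.shape[1]        # observation and latent dims kept equal for gaussian_id emissions
    rslds = ssm.SLDS(D, K, D,
                     transitions="recurrent_only",
                     dynamics="gaussian",
                     emissions="gaussian_id",
                     single_subspace=True)
    elbos, q = rslds.fit(y,
                         method="laplace_em",
                         variational_posterior="structured_meanfield",
                         num_iters=100)
    return rslds, q, elbos
```

Re-running each setting with a handful of different seeds then shows how much of the remaining variability is due to the initialization itself.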
Even if I set up my SSM model with gaussian_id emissions as follows, I get very different results each time I run the code:
rslds = ssm.SLDS(D_obs, K, D_latent, transitions="recurrent_only", dynamics="gaussian", emissions="gaussian_id", single_subspace=True)
Is there a way to reduce this? Below are two separate times I ran the same code.
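One common mitigation (a sketch, not an official SSM recipe; it assumes SSM uses numpy's global random state, and the function name is illustrative) is to run several random restarts of the same model and keep the fit with the highest final ELBO:

```python
# Sketch: fit the same model from several random initializations and keep the
# run whose final ELBO is best.
import numpy as np
import ssm

def best_of_n_restarts(y, K, n_restarts=5):
    D = y.shape[1]  # observation and latent dims kept equal for gaussian_id emissions
    best_elbo, best_fit = -np.inf, None
    for seed in range(n_restarts):
        np.random.seed(seed)  # each restart gets a different, reproducible initialization
        rslds = ssm.SLDS(D, K, D,
                         transitions="recurrent_only",
                         dynamics="gaussian",
                         emissions="gaussian_id",
                         single_subspace=True)
        elbos, q = rslds.fit(y,
                             method="laplace_em",
                             variational_posterior="structured_meanfield",
                             num_iters=100)
        if elbos[-1] > best_elbo:
            best_elbo, best_fit = elbos[-1], (rslds, q)
    return best_fit, best_elbo
```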