Any insight re: why only a few processes run simultaneously?
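One hedged possibility (the notes don't say how the fits are launched): if the runs go through Python's `concurrent.futures`, a default `ProcessPoolExecutor` caps its workers at the machine's CPU count, so extra fits queue instead of running at once. A minimal check:

```python
import os
from concurrent.futures import ProcessPoolExecutor

# Hypothetical diagnosis: with max_workers unset, the pool defaults
# to the CPU count, so on a 4-core machine only ~4 fits run at a time
# no matter how many seeds are queued.
n_cores = os.cpu_count() or 1
with ProcessPoolExecutor() as pool:   # default max_workers
    workers = pool._max_workers       # the cap CPython stored
print(n_cores, workers)
```

If the real runner is something else (e.g. a job scheduler), the analogous knob is its per-node worker or core limit.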
Low-priority nit: to get more seeds, could we use Juniper?
Does running a bunch of (simple) models with ~100 iterations and many different seeds seem like a reliable way to explore the LDA + TS space simultaneously?
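A minimal sketch of that kind of seed sweep, with `fit_once` as a hypothetical stand-in for one cheap, short-iteration LDA + TS fit (the real fitting call isn't shown in these notes):

```python
import random

def fit_once(seed, n_iter=100):
    """Hypothetical stand-in for one short model fit: run n_iter cheap
    iterations from the given seed and return a summary score."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_iter)) / n_iter

def seed_sweep(seeds, n_iter=100):
    """Fit one cheap model per seed; the spread of scores across seeds
    is a rough probe of how multimodal the LDA + TS space is."""
    return {s: fit_once(s, n_iter) for s in seeds}

scores = seed_sweep(range(50))
best_seed = max(scores, key=scores.get)
```

Because each run is seeded, the sweep is reproducible, so a promising seed can be re-fit later with full-length iterations.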
Fewer chunks
Alternatively, we could pull out fewer years, or pull out larger chunks. Is there a preference in the TS community?
Fun results to look at
BBS data: What if there really are no detectable changepoints, and the best strategy is to fit the most efficient possible mean for everyone?
Right now, to visualize any instance of a model configuration, you have to choose a year (to be left out) and a sim. Does using the mean abundance predicted across all the models and all the sims seem like a reasonable way to capture the variation?
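A sketch of what that pooling could look like, assuming each (left-out year, sim) run yields a vector of predicted abundances (`pool_predictions` and the data layout are hypothetical, not from the notes):

```python
import statistics

def pool_predictions(predictions):
    """predictions maps (left_out_year, sim) -> list of predicted
    abundances, one value per time step (hypothetical layout).

    Returns the element-wise mean curve plus the between-run sd, so a
    single summary captures both the central tendency and the
    variation across models and sims."""
    runs = list(predictions.values())
    steps = len(runs[0])
    means = [statistics.mean(r[i] for r in runs) for i in range(steps)]
    sds = [statistics.pstdev([r[i] for r in runs]) for i in range(steps)]
    return means, sds

# Toy example: two sims for one left-out year, two time steps each.
toy = {(2000, 1): [1.0, 2.0], (2000, 2): [3.0, 4.0]}
means, sds = pool_predictions(toy)
```

Plotting the mean with a +/- sd ribbon would show whether the configurations agree or whether the pooled mean hides real spread.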