alxempirical opened 8 years ago
I'm not sure it's true that models with substantially different numbers of iterations lead to bad results. In fact, it has been suggested that it's not at all unreasonable to intentionally add models stepwise as iterations accumulate, so that a few are always exploring more or less afresh.
That said, this whole interface is likely to change or go away very soon in favor of a more explicit MML interaction; see http://tinyurl.com/probcomp-bql-mml-split
The risk is that someone calls `p.analyze(iterations=1000)`, then later calls `p.analyze(iterations=10)` or something, and then all subsequent results are silently polluted by half the models being badly under-trained.
Using models with different numbers of training iterations might be OK, if you have assessed the convergence rate and have a rough idea how much training new models will need.
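One way to assess that, sketched below with a hypothetical helper (this is illustrative, not part of the actual `Population` class), is to track per-model iteration counts and flag the imbalance that adding fresh models creates:

```python
import warnings

class IterationTracker:
    """Track how many analysis iterations each model has received.

    Hypothetical helper for illustration, not part of the real
    Population class.
    """

    def __init__(self, num_models):
        self.iterations = [0] * num_models

    def analyze(self, iterations):
        # Record extra iterations applied uniformly to all current models.
        self.iterations = [n + iterations for n in self.iterations]

    def add_models(self, num_models):
        # New models start untrained; warn about the imbalance this creates.
        existing = max(self.iterations, default=0)
        if existing > 0:
            warnings.warn(
                "Adding %d fresh models alongside models trained for %d "
                "iterations; results may mix under-trained models."
                % (num_models, existing))
        self.iterations.extend([0] * num_models)

    def imbalance(self):
        # Ratio of most- to least-trained model (inf if any is untrained).
        lo, hi = min(self.iterations), max(self.iterations)
        if lo == 0:
            return float("inf") if hi > 0 else 1.0
        return hi / lo
```

With a rough convergence estimate in hand, a large `imbalance()` tells you the new models need more catch-up training before the ensemble's answers are trustworthy.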
Thanks for the pointer to the new doc. Do you mean the `Population` interface in general (in which case I should stop dog-fooding it [not that it's dog food]), or just this `analyze` interface?
I believe we will continue to have some version of the `Population` interface because it's handy for plotting and other utilities, but I know for a fact that `.analyze` would be better written explicitly in MML, and perhaps there should be a `.quick_analyze` for when you don't want to think too hard about it (which is perhaps what this should have been called in the first place).
And indeed the creation and initialization of the GPMs for the population via any metamodels would also be part of what is better done via MML rather than in `.initialize`, though again, there might usefully be a version of that in `.quick_analyze`.
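One hypothetical shape for that convenience layer (all names here are illustrative, not the actual probcomp API) keeps the explicit steps available while letting `quick_analyze` bundle them, initializing only when no models exist yet:

```python
class PopulationSketch:
    """Illustrative sketch: explicit steps plus a one-shot convenience.

    initialize_models and run_iterations stand in for whatever the
    explicit MML-backed operations turn out to be.
    """

    def __init__(self):
        self.num_models = 0
        self.total_iterations = 0

    def initialize_models(self, models):
        # Explicit step: create and initialize GPMs for the population.
        self.num_models = models

    def run_iterations(self, iterations):
        # Explicit step: train the existing models further.
        if self.num_models == 0:
            raise ValueError(
                "no models initialized; call initialize_models first")
        self.total_iterations += iterations

    def quick_analyze(self, models=10, iterations=100):
        # Convenience for users who don't want to think too hard about it:
        # only initializes when there are no models yet, so repeated calls
        # keep training the same ensemble instead of mixing fresh models in.
        if self.num_models == 0:
            self.initialize_models(models)
        self.run_iterations(iterations)
```

Repeated `quick_analyze` calls then extend training of the same ensemble rather than silently adding under-trained models.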
I think a better default here would be `models=0`. Model creation should be a separate step. Otherwise, it's too easy for someone to run `analyze` twice and end up with models which have been trained for substantially different numbers of iterations, which is likely to lead to bad results.
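As a sketch of the proposed behavior (hypothetical names, not the real `Population` code), a `models=0` default makes a model-less `analyze` call fail loudly instead of silently creating fresh, under-trained models:

```python
class Population:
    """Minimal sketch of analyze() with a models=0 default."""

    def __init__(self):
        self.model_iterations = []  # per-model iteration counts

    def create_models(self, models):
        # Separate, explicit model-creation step.
        self.model_iterations.extend([0] * models)

    def analyze(self, iterations, models=0):
        # With models=0 by default, analyze() never silently creates
        # models; a second call just trains the same ensemble further.
        if models:
            self.create_models(models)
        if not self.model_iterations:
            raise ValueError("no models: call create_models() first")
        self.model_iterations = [
            n + iterations for n in self.model_iterations]
```

Under this design, calling `analyze(iterations=1000)` and then `analyze(iterations=10)` leaves every model with the same 1010 iterations, rather than half the ensemble badly behind.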