Closed: jengelman closed this issue 7 years ago
In discussion with the sklearn devs, I realized that their mixture model needs a bit more work for full online inference, and since I'm going to open a PR there, I might as well do the same thing here for the HMM models and others. The `warm_start` parameter isn't really related to online inference, just optimization speed, so I'm closing this issue.
Some of the scikit-learn Bayesian models include a `warm_start` parameter, where the results of previous `fit()` calls are used as priors for the next fit. This would speed up repeated or online inference without needing to resort to full SVI.
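For context, the parameter is used like this (a minimal sketch; `GaussianMixture` exposes the same `warm_start` flag as the Bayesian variants, and the data here is just placeholder noise):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X_first = rng.randn(500, 2)        # initial batch
X_more = rng.randn(500, 2) + 0.1   # hypothetical later batch

gm = GaussianMixture(n_components=3, warm_start=True, random_state=0)
gm.fit(X_first)  # first fit runs the usual initialization
gm.fit(X_more)   # second fit resumes from the previous solution
```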
For example, in the mixture module it is implemented by skipping re-initialization when a previous fit has already converged:
```python
do_init = not (self.warm_start and hasattr(self, 'converged_'))
```
(code from the mixture base class)
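To make the gating concrete, here is a simplified sketch (not the actual sklearn source) of how a `do_init` flag like this sits inside a `fit()` loop; `ToyMixture` and its methods are hypothetical stand-ins:

```python
class ToyMixture:
    def __init__(self, warm_start=False):
        self.warm_start = warm_start

    def _initialize_parameters(self):
        ...  # fresh (e.g. random or k-means) initialization

    def fit(self, X):
        # Skip initialization iff warm_start is set and a previous fit()
        # already converged; converged_ only exists after a completed fit.
        do_init = not (self.warm_start and hasattr(self, 'converged_'))
        if do_init:
            self._initialize_parameters()
        # ... run EM iterations here, then record convergence:
        self.converged_ = True
        return self
```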