v1.0.1a, 8/26/2014 --
(1) Distribution inertia added for all distributions. This means that
parameters for the distributions are updated to be
inertia*prior_parameters + (1-inertia)*new_parameters. This can be
triggered using the distribution_inertia argument to the
Model.train method. #27 closed.
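As a rough illustration of that update rule (the standalone helper
below is hypothetical, not the library's internals):

    def blend(prior, new, inertia=0.0):
        # inertia=0.0 keeps only the newly estimated parameter;
        # inertia=1.0 keeps only the prior parameter.
        return inertia * prior + (1.0 - inertia) * new

    blend(2.0, 4.0, inertia=0.5)  # -> 3.0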
(2) Summary statistics implemented for Baum-Welch training. This
involves summarizing one sequence at a time and storing the
appropriate summary statistics on the distribution object using
Distribution.summarize( items, weights ), and then updating the
parameters from these summary statistics with an appropriate
inertia using Distribution.from_summarize( inertia=0.0 ). This makes
training on large numbers of sequences consistently faster, but
training on a small number of long sequences takes slightly
longer. #34 closed.
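A minimal sketch of this pattern, with a toy normal distribution
standing in for the library's actual implementation (everything
below is illustrative, not the real class):

    import math

    class ToyNormalDistribution(object):
        def __init__(self, mean, std):
            self.mean, self.std = mean, std
            # Running sufficient statistics: total weight, weighted
            # sum, and weighted sum of squares.
            self._w = self._wx = self._wxx = 0.0

        def summarize(self, items, weights):
            # Fold one sequence into the running statistics without
            # keeping the sequence itself in memory.
            for x, w in zip(items, weights):
                self._w += w
                self._wx += w * x
                self._wxx += w * x * x

        def from_summarize(self, inertia=0.0):
            # Compute new parameters from the accumulated statistics,
            # then blend: inertia*prior + (1-inertia)*new.
            if self._w > 0:
                new_mean = self._wx / self._w
                var = max(self._wxx / self._w - new_mean ** 2, 0.0)
                self.mean = inertia * self.mean + (1.0 - inertia) * new_mean
                self.std = inertia * self.std + (1.0 - inertia) * math.sqrt(var)
            self._w = self._wx = self._wxx = 0.0

Because summarize only accumulates statistics, memory use stays
constant no matter how many sequences are trained on, and
from_summarize resets the statistics for the next round of training.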
(3) Nosetests significantly expanded. This includes a new file,
test_training.py, which has a large number of tests of the Viterbi
and Baum-Welch training algorithms with different parameters. This
also includes expanded testing of each distribution and of each of
the training functions for each distribution.
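For flavor, a nosetest in the style of the new suite (the test body
here is a hypothetical stand-in, not copied from test_training.py):

    from nose.tools import assert_almost_equal

    def test_distribution_inertia():
        # inertia=0.5 should land the updated parameter halfway
        # between the prior and the newly estimated value.
        prior, new, inertia = 2.0, 4.0, 0.5
        updated = inertia * prior + (1.0 - inertia) * new
        assert_almost_equal(updated, 3.0)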