Currently (in its v1 state) the EM optimizer uses a number of numerical routines (for computing the $\xi$ and $\gamma$ matrices) in each iteration. While these are vectorized, they are not yet in a state where they can be numba-optimized. For instance, `np.vstack` is not supported by numba, so it would need to be refactored into an `np.repeat` call. I also haven't thought much about whether we need to worry about contiguous array memory layout (i.e., wherever matrix multiplication is done explicitly). This should be done to speed up the iterations of the BW (Baum-Welch) algorithm.
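As a rough illustration of the kind of refactor this implies (the function names here are hypothetical, not from the codebase), the sketch below replaces an `np.vstack`-based row broadcast with an `np.repeat` + `reshape` equivalent that numba's nopython mode can compile, and shows `np.ascontiguousarray` as one way to address the contiguity question before an explicit matrix product:

```python
import numpy as np

try:
    from numba import njit
except ImportError:  # fall back to plain Python if numba is absent
    def njit(f=None, **kwargs):
        if f is None:
            return lambda g: g
        return f

def tile_rows_vstack(row, n):
    # Original style: stack n copies of a 1-D `row` into an (n, K) matrix.
    # np.vstack on a list is the kind of call numba rejects.
    return np.vstack([row] * n)

@njit(cache=True)
def tile_rows_repeat(row, n):
    # numba-friendly equivalent: repeat each element n times, reshape to
    # (K, n), then transpose to get the same (n, K) layout as vstack.
    return np.repeat(row, n).reshape(-1, n).T

# Contiguity: a column slice A[:, k] is not C-contiguous; copying it
# with np.ascontiguousarray before a dot product inside a hot loop is
# one option worth profiling (hypothetical example, not from the code).
A = np.arange(12.0).reshape(3, 4)
col = np.ascontiguousarray(A[:, 1])
```

Both helpers produce identical matrices, so the swap is behavior-preserving; whether it actually pays off depends on profiling the compiled BW loop.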