Hi Scott,
I hope this message finds you well. We found that the M-step for the InputDrivenObservations class was slow, so we updated it to use explicit gradient and Hessian functions. Additionally, I added a 6th section to the "2b Input Driven Observations (GLM-HMM)" notebook to show a recovery analysis for multinomial GLM data (complementing the Bernoulli GLM case studied in the rest of the notebook). I also made small adjustments to the math at the top of the notebook to more accurately reflect what is going on in the InputDrivenObservations class (namely, that the weight vector for one class in a given state is fixed to all zeros, so as to ensure identifiability).

In case it's useful, here is the math for the gradient and Hessian functions we implemented in the M-step of the InputDrivenObservations class:
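(The exact expressions from the PR are not reproduced here; the following is a sketch of the standard gradient and Hessian for a weighted multinomial logistic regression, writing $\gamma_{tk}$ for the E-step posterior responsibility of state $k$ at time $t$, $x_t$ for the input, and $w_{kc}$ for the class-$c$ weights of state $k$, with the last class's weights fixed at zero as noted above.)

$$
p(y_t = c \mid x_t, z_t = k) = \frac{\exp(w_{kc}^\top x_t)}{\sum_{c'=1}^{C} \exp(w_{kc'}^\top x_t)}, \qquad w_{kC} \equiv 0,
$$

so the per-state weighted log-likelihood is
$$
\mathcal{L}_k(W_k) = \sum_t \gamma_{tk} \log p(y_t \mid x_t, z_t = k),
$$
with gradient
$$
\frac{\partial \mathcal{L}_k}{\partial w_{kc}} = \sum_t \gamma_{tk} \left( \mathbb{1}[y_t = c] - p(y_t = c \mid x_t, z_t = k) \right) x_t,
$$
and Hessian blocks (writing $p_{tc} = p(y_t = c \mid x_t, z_t = k)$)
$$
\frac{\partial^2 \mathcal{L}_k}{\partial w_{kc} \, \partial w_{kc'}^\top} = -\sum_t \gamma_{tk} \, p_{tc} \left( \delta_{cc'} - p_{tc'} \right) x_t x_t^\top .
$$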
Apologies for creating extra work for you; there's no rush to review this. I just thought the speed-up in the M-step might be useful for anyone else using this class.
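In case a concrete illustration helps, here is a minimal numerical sketch (not the ssm code; the function name and signature are made up) of the weighted multinomial-logistic gradient and Hessian with one class's weights pinned to zero:

```python
import numpy as np

def weighted_mnlogit_grad_hess(W_free, X, y, gamma):
    """Gradient and Hessian of a weighted multinomial logistic-regression
    log-likelihood (a sketch; names are hypothetical, not the ssm API).

    W_free : (C-1, D) free weights; the C-th class is fixed to zeros.
    X      : (T, D) inputs.
    y      : (T,) integer labels in 0..C-1.
    gamma  : (T,) per-timestep weights (e.g. E-step responsibilities).
    """
    Cm1, D = W_free.shape
    W = np.vstack([W_free, np.zeros((1, D))])       # append fixed zero row
    logits = X @ W.T                                # (T, C)
    logits -= logits.max(axis=1, keepdims=True)     # stabilize softmax
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)               # (T, C) class probabilities
    Y = np.eye(Cm1 + 1)[y]                          # (T, C) one-hot labels

    # Gradient: sum_t gamma_t (1[y_t = c] - p_tc) x_t, free classes only.
    G = (gamma[:, None] * (Y - P))[:, :Cm1].T @ X   # (C-1, D)

    # Hessian block (c, c'): -sum_t gamma_t p_tc (delta_cc' - p_tc') x_t x_t^T.
    H = np.zeros((Cm1 * D, Cm1 * D))
    for c in range(Cm1):
        for cp in range(Cm1):
            w = gamma * P[:, c] * ((c == cp) - P[:, cp])
            H[c*D:(c+1)*D, cp*D:(cp+1)*D] = -(X * w[:, None]).T @ X
    return G, H
```

Handing expressions like these to a Newton-type optimizer (e.g. scipy.optimize.minimize with its jac and hess arguments) avoids re-deriving gradients on every call, which is presumably where the M-step speed-up comes from.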
Best,
Zoe