nipunbatra / hmm

https://nipunbatra.github.io/hmm/

Feedback May 11 #6

Open nipunbatra opened 4 years ago

nipunbatra commented 4 years ago

HMM Evidence Likelihood

Backward algorithm

Optimal sequence of hidden states

The key idea is to store the best score (highest probability) along a single path at time $t$, which accounts for the first $t$ observations and ends in state $s_j$. We can compute the same quantity for the next timestep by considering all $K$ transitions from each of the states. Then define the quantity $\delta_t$ and colour code the above text, as formalised below.
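For reference, one standard way to write this down (the notation here is my assumption, not necessarily the article's: hidden states $q_t \in \{s_1, \dots, s_K\}$, observations $o_1, \dots, o_T$, transition probabilities $a_{kj}$, emission probabilities $b_j(o_t)$, model $\lambda$):

$$\delta_t(j) \;=\; \max_{q_1, \dots, q_{t-1}} P(q_1, \dots, q_{t-1},\, q_t = s_j,\, o_1, \dots, o_t \mid \lambda),$$

with the recursion over all $K$ incoming transitions

$$\delta_t(j) \;=\; \Big[\max_{1 \le k \le K} \delta_{t-1}(k)\, a_{kj}\Big]\, b_j(o_t).$$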

Trellis.pdf
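Since the text describes the recursion in words, a minimal runnable sketch in Python/NumPy may help; the names `pi`, `A`, `B`, `obs` and the toy numbers are my assumptions for illustration, not the article's:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """pi: (K,) initial probs, A: (K, K) transition probs,
    B: (K, M) emission probs, obs: length-T observation indices."""
    K, T = len(pi), len(obs)
    delta = np.zeros((T, K))           # best path score ending in each state
    psi = np.zeros((T, K), dtype=int)  # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        # consider all K transitions from each previous state
        scores = delta[t - 1, :, None] * A  # scores[k, j] = delta[t-1, k] * a_kj
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # backtrack to recover the optimal hidden-state sequence
    path = [delta[-1].argmax()]
    for t in range(T - 1, 0, -1):
        path.append(psi[t, path[-1]])
    return path[::-1]

# toy example: 2 hidden states, 2 observation symbols
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(pi, A, B, [0, 1, 0]))
```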

Parameter learning

Replace the above with something like:

We have seen the procedure for calculating the optimal parameters given the hidden state sequence. However, the hidden state sequence is often unknown. In that case, we first try to "estimate" the "expected" state sequence based on some initial estimates of the parameters. Then, we use the principles of MLE for an observed state sequence to refine the parameters. We apply these two steps iteratively via an algorithm called Expectation Maximization (EM).
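To make the two-step loop concrete, here is a minimal, unscaled Baum-Welch sketch in Python/NumPy. All names and the single-sequence setup are assumptions for illustration; a real implementation would rescale $\alpha$/$\beta$ or work in log space to avoid underflow on long sequences:

```python
import numpy as np

def forward(pi, A, B, obs):
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    beta = np.ones((len(obs), A.shape[0]))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch(pi, A, B, obs, n_iter=10):
    obs = np.asarray(obs)
    for _ in range(n_iter):
        # E-step: expected state occupancies (gamma) and transitions (xi)
        alpha, beta = forward(pi, A, B, obs), backward(A, B, obs)
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :])
        xi /= xi.sum(axis=(1, 2), keepdims=True)
        # M-step: MLE re-estimates using the expected counts
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B = np.zeros_like(B)
        for m in range(B.shape[1]):
            B[:, m] = gamma[obs == m].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

# toy example with assumed initial parameter estimates
obs = [0, 1, 1, 0, 1]
pi = np.array([0.5, 0.5])
A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi, A, B = baum_welch(pi, A, B, obs)
print(A.round(3), B.round(3))
```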

Rithwikksvr commented 4 years ago

@nipunbatra sir

I have a few questions

  1. As discussed, the colour-coded diagram for the $\epsilon$ quantity is not adding much value, so I didn't add it.

  2. Do we need to add worked-out examples to the article now, or can we add them in a later iteration? The reason is that we would need to create the colour-coded LaTeX equations for each of the operations, convert them to SVG, and then show them in the article.

  3. Regarding "while defining $e_t$ you forgot to mention the observed sequence, and you have not yet colour coded it": I didn't understand this issue.