nipunbatra opened this issue 4 years ago
@nipunbatra sir
I have a few questions
As discussed, the color-coded diagram for \epsilon is not adding much value, hence I didn't add it.
Do we need to add worked-out examples to the article now, or can we add them in a later iteration? The reason being that we need to create the color-coded LaTeX equations for each of the operations, then convert them to SVG and then show them in the article.
Regarding "while defining e_t you forgot to mention the observed sequence, and you have not yet colour-coded it": I didn't understand this issue.
HMM Evidence Likelihood
Backward algorithm
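A minimal sketch of the backward recursion for the evidence likelihood, assuming a discrete-emission HMM with illustrative names: transition matrix `A`, emission matrix `B`, and initial distribution `pi` (none of these names come from the article itself):

```python
import numpy as np

def backward(A, B, pi, obs):
    """Backward algorithm: beta_t(i) = P(o_{t+1}, ..., o_T | z_t = i).

    A:   (K, K) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    B:   (K, M) emission matrix,   B[i, m] = P(o_t=m | z_t=i)
    pi:  (K,)   initial state distribution
    obs: sequence of observation indices
    """
    T, K = len(obs), A.shape[0]
    beta = np.zeros((T, K))
    beta[-1] = 1.0                      # base case: beta_T(i) = 1
    for t in range(T - 2, -1, -1):      # recurse backwards in time
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    # evidence likelihood P(O) recovered from the backward messages
    likelihood = np.sum(pi * B[:, obs[0]] * beta[0])
    return beta, likelihood
```

On a toy two-state HMM this matches the likelihood obtained by brute-force summation over all hidden paths, which is a useful sanity check for the article's worked examples.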
Optimal sequence of hidden states
The key idea is to store the best score (highest probability) along a single path at time t, which accounts for the first t observations and ends in state i. We can compute the same quantity for the next timestamp by considering all K transitions from each of the states. Then define the quantity delta_t and colour-code the above text.
Trellis.pdf
Parameter learning
Replace the above with something like:
We have seen the procedure to calculate the optimal parameters given the hidden state sequence. However, it is common that the hidden state sequence is unknown. In such a case, we first try to "estimate" the "expected" state sequence based on some initial estimates of the parameters. Then, we use the principles of MLE for the observed state sequence to refine the parameters. We apply these two steps iteratively via an algorithm called Expectation Maximization (EM).
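One iteration of the two steps described above could be sketched as below (a Baum-Welch-style sketch, assuming a discrete-emission HMM; the helper names and the parameter names `A`, `B`, `pi` are illustrative, not from the article). The E-step computes the expected counts of transitions i to j and emissions from state i; the M-step normalizes them exactly as the MLE for an observed sequence would:

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha_t(i) = P(o_1, ..., o_t, z_t = i)."""
    T, K = len(obs), A.shape[0]
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward_msgs(A, B, obs):
    """beta_t(i) = P(o_{t+1}, ..., o_T | z_t = i)."""
    T, K = len(obs), A.shape[0]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def em_step(A, B, pi, obs):
    """One EM iteration: expected counts (E-step), then MLE update (M-step)."""
    T, K = len(obs), A.shape[0]
    alpha, beta = forward(A, B, pi, obs), backward_msgs(A, B, obs)
    evidence = alpha[-1].sum()
    # E-step: posteriors over states and transitions given the observations
    gamma = alpha * beta / evidence                   # P(z_t = i | O)
    xi = np.zeros((T - 1, K, K))                      # P(z_t=i, z_{t+1}=j | O)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
    xi /= evidence
    # M-step: normalize the expected transition and emission counts
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for m in range(B.shape[1]):
        idx = [t for t in range(T) if obs[t] == m]
        new_B[:, m] = gamma[idx].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```

Each iteration is guaranteed not to decrease the evidence likelihood, which is a handy property to demonstrate in a worked example.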
[x] Expectation step: what is ESS? Better to say that we can compute the expected state sequence. But, from our earlier MLE computations, we know that we only care about the number of transitions from i to j and the number of emissions from state i. Thus, we compute e_t, which is the ESS (expected sufficient statistics).
[ ] While defining e_t you forgot to mention the observed sequence, and you have not yet colour-coded it.