Open hqzizania opened 8 years ago
@garfieldnate I'm a beginner with CRF++, and a question has plagued me for many days; I'm exhausted~
In void Node::calcBeta(), why is the code so similar to calcAlpha()? The two are very different in the original forward-backward algorithm for HMMs.
Sorry, I'm not a core contributor and I don't have any special knowledge of the code or its algorithms. The only difference I see between those two methods is the use of rpath/rnode vs. lpath/lnode. I assume lpath goes backward and rpath goes forward, giving you your forward-backward algorithm. Can you give any details from the paper about what's different here?
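For reference, here are the two methods as they appear in node.cpp, quoted roughly from memory, so check the repository for the exact text:

    void Node::calcAlpha() {
      alpha = 0.0;
      // forward pass: log-sum over all incoming paths (lpath)
      for (const_Path_iterator it = lpath.begin(); it != lpath.end(); ++it) {
        alpha = logsumexp(alpha,
                          (*it)->cost + (*it)->lnode->alpha,
                          (it == lpath.begin()));
      }
      alpha += cost;  // add this node's own state cost
    }

    void Node::calcBeta() {
      beta = 0.0;
      // backward pass: log-sum over all outgoing paths (rpath)
      for (const_Path_iterator it = rpath.begin(); it != rpath.end(); ++it) {
        beta = logsumexp(beta,
                         (*it)->cost + (*it)->rnode->beta,
                         (it == rpath.begin()));
      }
      beta += cost;  // this node's own state cost is added here as well
    }

Apart from the lpath/lnode vs. rpath/rnode swap they look identical to me, including the final += cost.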
Thanks for your reply.
calcAlpha() is consistent with the formula in the paper, but I think the calcBeta() code should be:
void Node::calcBeta() {
  beta = 0.0;
  for (const_Path_iterator it = rpath.begin(); it != rpath.end(); ++it) {
    beta = logsumexp(beta,
                     (*it)->cost + (*it)->rnode->beta + (*it)->rnode->cost,
                     (it == rpath.begin()));
  }
}
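As far as I can tell (this is my notation, and conventions differ between papers, so treat it as my reading rather than the paper's exact formula), this version computes

    beta_t(y)  = logsumexp_{y'} ( trans_t(y, y') + state_{t+1}(y') + beta_{t+1}(y') )

while calcAlpha() computes

    alpha_t(y) = state_t(y) + logsumexp_{y'} ( trans_{t-1}(y', y) + alpha_{t-1}(y') )

so alpha_t contains the state cost of position t but beta_t does not, which is the convention I know from the HMM forward-backward algorithm.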
I have no idea. I haven't read the paper. My two suggestions would be to 1) try changing the code and seeing if it still works, and 2) ask this question on stats.stackexchange.com/ and attach the [conditional-random-field] tag.
I changed the code according to my understanding, but the results were wrong, so the existing implementation is definitely right. I just wonder why it differs from the paper, or whether there is another paper that explains the details.
Anyway, thanks so much~ I will try asking on stats.stackexchange.com.
Hi, I've posted an answer to your question here: http://stats.stackexchange.com/a/240610/95569, hope it's not too late :)
Now I see, Dontloo, thank you so much!
You are welcome :)
@Dontloo Since beta is actually beta', the cost is subtracted in Node::calcExpectation, which I understand. But one more question: why is the cost not subtracted in Path::calcExpectation, even though the beta used there is also beta'?
Z is also computed from beta', so isn't Z larger than the real Z_?
@hqzizania That's because c = std::exp(lnode->alpha + path.cost + rnode->cost + (rnode->beta - rnode->cost) - Z), which is just std::exp(lnode->alpha + path.cost + rnode->beta - Z): the added rnode->cost and the subtracted rnode->cost cancel. Here path.cost is the transition term and rnode->cost is the state term.
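To spell that out, here are the relevant lines as I remember them (quoted roughly, so check node.cpp, path.cpp and tagger.cpp for the exact code):

    // Node::calcExpectation: alpha and beta BOTH already include this node's
    // state cost, so it is subtracted once to avoid double counting.
    const double c = std::exp(alpha + beta - cost - Z);

    // Path::calcExpectation: lnode->alpha covers the left node's state cost and
    // rnode->beta covers the right node's state cost, so nothing is counted
    // twice and nothing needs to be subtracted.
    const double c = std::exp(lnode->alpha + cost + rnode->beta - Z);

    // TaggerImpl::forwardbackward: Z_ is accumulated from the beta values at
    // position 0, which already include each node's state cost, so it follows
    // the same convention as alpha/beta and is not "too large".
    Z_ = 0.0;
    for (size_t j = 0; j < ysize_; ++j) {
      Z_ = logsumexp(Z_, node_[0][j]->beta, j == 0);
    }

So the beta' convention is used consistently everywhere, and Z comes out equal to the textbook partition function.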