Closed sunjunlishi closed 8 years ago
It's just a simple feed-forward neural network; you can almost think of it as a single-hidden-layer perceptron with sigmoid activations and an undirected graphical model on top of it. It is not exactly like an HMM, as the edge features are undirected and we can have links between nodes that are far away from each other.
It would in theory be possible to apply incremental training on it without the need to retrain it with all the data, although I have not used it like that before.
For training, you're right we're using maximum likelihood (using LBFGS).
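To make the two pieces above concrete, here is a minimal sketch (not the actual CCNF code) of a vertex feature: in the paper it is the response of a single logistic neuron, f_k(x_i) = h(θ_k, x_i) with h the sigmoid. The names `vertex_feature` and `theta` are hypothetical.

```python
import numpy as np

def vertex_feature(x, theta):
    """Sigmoid response of one hidden unit to input vector x.

    Illustrates the CCNF vertex feature: the output of a single
    neuron with logistic activation applied to theta^T x.
    """
    return 1.0 / (1.0 + np.exp(-theta @ x))

# toy example: one 3-dimensional input and one hidden unit
x = np.array([0.5, -1.0, 2.0])
theta = np.array([0.1, 0.4, -0.2])
print(vertex_feature(x, theta))  # ~0.3208, since theta @ x = -0.75
```

The full model sums many such units (weighted by the alpha parameters) before the graphical-model layer combines them with the edge features.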
Thanks, Tadas
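As a rough illustration of the warm-start idea mentioned above, the sketch below uses scipy's L-BFGS: train once, then re-optimize on an enlarged dataset starting from the previously learned parameters instead of from scratch. The quadratic objective is a stand-in for the real CCNF negative log-likelihood; all names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective: a simple quadratic whose minimizer is the data
# mean, used only to illustrate warm-starting L-BFGS. The real model
# would minimize the CCNF negative log-likelihood instead.
def neg_log_likelihood(params, data):
    return np.sum((params - data.mean(axis=0)) ** 2)

np.random.seed(0)
old_data = np.random.randn(100, 5)
res1 = minimize(neg_log_likelihood, x0=np.zeros(5),
                args=(old_data,), method="L-BFGS-B")

# "Incremental" step: when new samples arrive, initialize the
# optimizer from the previously trained parameters (res1.x) rather
# than retraining from a cold start.
new_data = np.vstack([old_data, np.random.randn(20, 5)])
res2 = minimize(neg_log_likelihood, x0=res1.x,
                args=(new_data,), method="L-BFGS-B")
print(res2.x)
```

Note this still evaluates the objective on all the data; a truly online scheme would need a stochastic or streaming estimator, which is a different design than what the library uses.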
"It would in theory be possible to apply incremental training on it without the need to retrain it with all the data" ,could you realize it, and it Infinite value;
The trained value as Initialization parameter;and Sample will improve a certain part of the value
"Vertex features fk represent the mapping from the input xi to output yi through a single layer neural network and k is the weight vector for a particular",(from your paper )the neural network is RBF neural network? "CCNF is an undirected graphical model " and is it using Hidden Markov Model(HMM)? "gk(yi; yj) = -1/2*S(gk)i;j (yi ).^2" it is not like HMM ? "S(l)i;j =1; |j-i| =1,else 0",alittle like hmm The whole model can be incremental training?every time to Increase the new training data ,All the training data is re training. training Key points, how to understand ,is it maximum likelihood estimation.and it is not like hmm