Open moezlinlin opened 6 years ago
It's been a while since I've looked at the code, and I don't have the time right now, but yes I think it's a basic character-level matching accuracy. We could probably get more complicated by using some kind of edit-distance based metric. Does that answer your question? For actual numbers, check the README of the project.
These two lines in the code are used to calculate the accuracy:
```python
correct_prediction = tf.equal(tf.to_int32(tf.argmax(output, 2)), true_labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
However, the measured result does not seem to be sequence-level accuracy. It looks more like the accuracy of the characters at each corresponding position, averaged over all positions. Can it be fairly compared with the original author's 75%? Is my understanding correct?
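To make the distinction concrete, here is a small NumPy sketch (my own illustration, not code from this repository) contrasting the per-position character accuracy that the `tf.equal` / `tf.reduce_mean` lines compute with an edit-distance-based metric like the one suggested above. Note how a single insertion shifts every later character, so positional accuracy drops sharply even though only a couple of edits separate the strings:

```python
import numpy as np

def char_accuracy(pred, true):
    # Per-position character accuracy: the fraction of positions where
    # the predicted character equals the true character. This mirrors
    # tf.equal(...) followed by tf.reduce_mean(tf.cast(..., tf.float32)).
    pred = np.asarray(pred)
    true = np.asarray(true)
    return float(np.mean(pred == true))

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences
    # (minimum number of insertions, deletions, and substitutions).
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,      # deletion
                          d[i, j - 1] + 1,      # insertion
                          d[i - 1, j - 1] + cost)  # substitution
    return int(d[m, n])

true_str = "hello"
pred_str = "hhell"  # one spurious 'h' shifts the rest of the string

# Positions: h==h, h!=e, e!=l, l==l, l!=o -> 2 of 5 match.
print(char_accuracy(list(pred_str), list(true_str)))  # 0.4

# But only two edits (delete one 'h', append 'o') separate the strings.
print(levenshtein(pred_str, true_str))  # 2
```

Under the positional metric the prediction scores 40%, while an edit-distance view (1 - 2/5 = 60% here) credits the mostly-correct content. This is why a per-position figure and an edit-distance figure are not directly comparable, which seems to be the heart of the question about the 75% number.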