I created my own vocabulary and tags, and I ran the code with these parameters:
{ "batch_size": 2, "buffer": 15000, "chars": "vocab.chars.txt", "dim": 300, "dim_chars": 100, "dropout": 0.3, "epochs": 25, "filters": 50, "glove": "glove.npz", "kernel_size": 3, "lstm_size": 100, "num_oov_buckets": 1, "tags": "vocab.tags.txt", "words": "vocab.words.txt" }
After 8 hours of training I got these results:

Saving dict for global step 9534: acc = 0.96229786, f1 = 0.8532131, global_step = 9534, loss = 36.943344, precision = 0.8494137$
Saving 'checkpoint_path' summary for global step 9534: results/model/model.ckpt-9534
But when I try to predict on the same test dataset, the model does not find the entities. For example, the test set contains Bora B-NAME B-NAME, and when I pass that same sentence, the algorithm does not predict that entity. The evaluation reports an accuracy of 0.962297 and an f1 of 0.8532131, but in reality the model detects only about 38% of the entities, so the real score is not 0.85 but only 0.38.
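To quantify the gap I score the saved predictions at the entity level rather than the token level. This is a minimal sketch (not part of the original code) using the seqeval package; it assumes a CoNLL-style file with one "word gold pred" triple per line and blank lines between sentences, and the file name is illustrative:

```python
from seqeval.metrics import classification_report, f1_score

def read_tags(path, column):
    """Collect one tag column per sentence from a CoNLL-style file."""
    sentences, current = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:                 # blank line = sentence boundary
                if current:
                    sentences.append(current)
                    current = []
            else:
                current.append(line.split()[column])
    if current:
        sentences.append(current)
    return sentences

gold = read_tags('score/test.preds.txt', column=1)   # gold tags
pred = read_tags('score/test.preds.txt', column=2)   # predicted tags

# Entity-level scores: a token-level accuracy of 0.96 can coexist with a
# much lower entity-level f1 when most tokens are tagged 'O'.
print(f1_score(gold, pred))
print(classification_report(gold, pred))
```

Scored this way, the predictions come out far below the reported 0.85.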
What could be the problem? Do you have any ideas?
Thank you!