Closed · seven-linglx closed this issue 5 years ago
Hi, did you train on the full dataset? The given tmn_data.txt is just sampled data (only 5000 samples) from the TagMyNews dataset, provided for testing. Or has the training finished? It is an alternating training process, and it runs until neither component improves any further.
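The alternating scheme described above can be sketched as a small loop: train each component in turn and stop once neither has improved for a while. This is only a hypothetical illustration of the idea, not the repository's actual code; the function names, metrics, and `patience` parameter are all assumptions.

```python
def alternating_train(train_ntm, train_cls, max_epochs=800, patience=5):
    """Alternate between two training steps until both stop improving.

    train_ntm() should return a validation perplexity (lower is better);
    train_cls() should return a validation accuracy (higher is better).
    """
    best_ppl = float("inf")
    best_acc = 0.0
    stale = 0  # consecutive epochs with no improvement in either component
    for epoch in range(1, max_epochs + 1):
        ppl = train_ntm()  # one epoch of the neural topic model
        acc = train_cls()  # one epoch of the classifier
        improved = False
        if ppl < best_ppl:
            best_ppl, improved = ppl, True
        if acc > best_acc:
            best_acc, improved = acc, True
        stale = 0 if improved else stale + 1
        if stale >= patience:
            break  # neither component improved for `patience` epochs
    return best_ppl, best_acc

# Toy usage: both metrics improve for a few epochs, then plateau.
ppls = iter([2500.0, 2300.0, 2200.0] + [2210.0] * 50)
accs = iter([0.78, 0.80, 0.805] + [0.80] * 50)
result = alternating_train(lambda: next(ppls), lambda: next(accs),
                           max_epochs=40, patience=5)
```

With the toy metrics above, training stops a few epochs after both curves flatten, keeping the best values seen so far.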
Yes, I have not used the full dataset. Thanks.
@seven-linglx Hi, where did you download the dataset? http://acube.di.unipi.it/tmn-dataset/ no longer opens.
@seven-linglx Hi, where did you download the dataset? http://acube.di.unipi.it/tmn-dataset/ no longer opens. — At the time I simply used my own dataset instead of the one provided here.
Hi, I have trained this model using tmn_data.txt in the data directory, but the result is 'val acc 0.8059, f1 0.8058', whereas the paper reports 0.851. Has anything changed in the parameters or the training set? I changed nothing when training.
The paper is "Topic Memory Networks for Short Text Classification"
This is the last output from training:
850/850 [==============================] - 0s 277us/step
ntm estimated perplexity upper bound on validation set: 2149.354
No improvement in epoch 436
Epoch 437/800
training cls
106/106 [==============================] - 17s 159ms/step
cls train loss: 0.0082
No improvement in epoch 437 with val acc 0.8059, f1 0.8059
Epoch 438/800
training cls
106/106 [==============================] - 18s 168ms/step
cls train loss: 0.0087
No improvement in epoch 438 with val acc 0.8082, f1 0.8088
Epoch 439/800
training cls
106/106 [==============================] - 17s 161ms/step
cls train loss: 0.0077
No improvement in epoch 439 with val acc 0.7941, f1 0.7956
Epoch 440/800
training cls
106/106 [==============================] - 17s 159ms/step
cls train loss: 0.0090
No improvement in epoch 440 with val acc 0.8047, f1 0.8060
Epoch 441/800
training cls
106/106 [==============================] - 17s 163ms/step
cls train loss: 0.0077
No improvement in epoch 441 with val acc 0.8000, f1 0.7990
Epoch 442/800
training cls
106/106 [==============================] - 17s 161ms/step
cls train loss: 0.0076
No improvement in epoch 442 with val acc 0.8059, f1 0.8058