Your command looks right to me. We believe the issue comes from the fact that the tokenization has changed. See https://github.com/YujiaBao/Distributional-Signatures/issues/32
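If it helps, one quick way to check whether the tokenization on your machine differs from what the preprocessed data was built with is to compare wordpiece output from the legacy `pytorch_pretrained_bert` tokenizer and a current `transformers` tokenizer. This is only an illustrative sketch, not code from this repo: the sample headline is made up, and it assumes both packages are installed.

```python
# Illustrative sketch only: compare wordpieces from the legacy pytorch_pretrained_bert
# tokenizer with those from a current transformers tokenizer. If they disagree, the
# preprocessed data/huffpost_bert_uncase.json may no longer line up with the tokenizer
# and vocabulary you load at run time, which could explain an accuracy gap.
from pytorch_pretrained_bert import BertTokenizer as LegacyTokenizer
from transformers import BertTokenizer as CurrentTokenizer

sample = "twitter users hilariously roast the new royal baby photos"  # made-up headline

legacy = LegacyTokenizer.from_pretrained("bert-base-uncased")
current = CurrentTokenizer.from_pretrained("bert-base-uncased")

legacy_pieces = legacy.tokenize(sample)
current_pieces = current.tokenize(sample)

print("legacy :", legacy_pieces)
print("current:", current_pieces)
print("match  :", legacy_pieces == current_pieces)
```

If the two disagree, pinning the tokenizer library to the version the repo was originally developed against is one way to rule this mismatch out.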
Thank you very much for your quick reply; I see the reason for the problem now.
I ran 5-way 1-shot and 5-way 5-shot classification on HuffPost and FewRel using BERT with classifier=proto. However, the results I obtained are quite different from those in Table 2. Is this a known problem at the moment? If possible, could you share the commands you used for these two datasets with classifier=proto and bert=true? I would like to understand what went wrong to cause these results.
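For reference, the HuffPost 1-shot run whose log is pasted below corresponds roughly to the following invocation. I reconstructed it from the printed parameter dump, so the flag spellings are my guess at the lower-cased parameter names rather than a verified copy of the command:

```bash
# Approximate reconstruction of the run below, inferred from the printed parameter dump.
# Flag names may not match src/main.py exactly; only the parameters most relevant
# to this run are shown.
python src/main.py \
    --cuda 0 \
    --mode train \
    --way 5 \
    --shot 1 \
    --query 25 \
    --dataset huffpost \
    --data_path data/huffpost_bert_uncase.json \
    --n_train_class 20 \
    --n_val_class 5 \
    --n_test_class 16 \
    --embedding cnn \
    --classifier proto \
    --bert \
    --pretrained_bert bert-base-uncased
```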
```
Parameters: AUXILIARY=[] BERT=True BERT_CACHE_DIR=~/.pytorch_pretrained_bert/ CLASSIFIER=proto CLIP_GRAD=None CNN_FILTER_SIZES=[3, 4, 5] CNN_NUM_FILTERS=50 CUDA=0 DATA_PATH=data/huffpost_bert_uncase.json DATASET=huffpost DROPOUT=0.1 EMBEDDING=cnn FINETUNE_EBD=False FINETUNE_EPISODES=10 FINETUNE_LOSS_TYPE=softmax FINETUNE_MAXEPOCHS=5000 FINETUNE_SPLIT=0.8 INDUCT_ATT_DIM=64 INDUCT_HIDDEN_DIM=100 INDUCT_ITER=3 INDUCT_RNN_DIM=128 LR=0.001 LRD2_NUM_ITERS=5 MAML=False MODE=train N_TEST_CLASS=16 N_TRAIN_CLASS=20 N_VAL_CLASS=5 N_WORKERS=10 NOTQDM=False PATIENCE=20 PRETRAINED_BERT=bert-base-uncased PROTO_HIDDEN=[300, 300] QUERY=25 RESULT_PATH= SAVE=False SEED=330 SHOT=1 SNAPSHOT= TEST_EPISODES=1000 TRAIN_EPISODES=100 TRAIN_EPOCHS=1000 VAL_EPISODES=100 WAY=5 WORD_VECTOR=wiki.en.vec WV_PATH=./
22/09/30 08:49:09: Loading data
22/09/30 08:49:09: Class balance: {19: 900, 4: 900, 5: 900, 8: 900, 1: 900, 13: 900, 31: 900, 16: 900, 36: 900, 39: 900, 14: 900, 11: 900, 23: 900, 17: 900, 7: 900, 21: 900, 26: 900, 12: 900, 18: 900, 37: 900, 6: 900, 22: 900, 40: 900, 15: 900, 29: 900, 10: 900, 35: 900, 38: 900, 9: 900, 25: 900, 30: 900, 20: 900, 3: 900, 27: 900, 24: 900, 34: 900, 33: 900, 32: 900, 0: 900, 2: 900, 28: 900}
22/09/30 08:49:09: Avg len: 13.077235772357724
22/09/30 08:49:09: Loading word vectors
22/09/30 08:49:15: Total num. of words: 9376, word vector dimension: 300
22/09/30 08:49:15: Num. of out-of-vocabulary words(they are initialized to zeros): 1586
22/09/30 08:49:15: #train 18000, #val 4500, #test 14400
22/09/30 08:49:18, Building embedding
22/09/30 08:49:18, Loading pretrained bert
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias']
22/09/30 08:57:27, ep 0, val acc: 0.2714 ± 0.0634, train stats ebd_grad: 1.9589, clf_grad: 0.9190
22/09/30 08:57:27, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/0
22/09/30 08:58:42, ep 1, val acc: 0.2698 ± 0.0622, train stats ebd_grad: 2.0828, clf_grad: 1.0325
22/09/30 08:59:16, ep 2, val acc: 0.2676 ± 0.0699, train stats ebd_grad: 2.0500, clf_grad: 1.1261
22/09/30 08:59:48, ep 3, val acc: 0.2974 ± 0.0766, train stats ebd_grad: 1.9573, clf_grad: 1.1916
22/09/30 08:59:48, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/3
22/09/30 09:00:28, ep 4, val acc: 0.2914 ± 0.0696, train stats ebd_grad: 2.0284, clf_grad: 1.1392
22/09/30 09:01:01, ep 5, val acc: 0.2679 ± 0.0668, train stats ebd_grad: 2.0021, clf_grad: 1.1489
22/09/30 09:01:35, ep 6, val acc: 0.2734 ± 0.0568, train stats ebd_grad: 1.9017, clf_grad: 1.2707
22/09/30 09:02:08, ep 7, val acc: 0.2983 ± 0.0753, train stats ebd_grad: 1.9184, clf_grad: 1.2527
22/09/30 09:02:08, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/7
22/09/30 09:02:44, ep 8, val acc: 0.2695 ± 0.0675, train stats ebd_grad: 2.1273, clf_grad: 1.3529
22/09/30 09:03:18, ep 9, val acc: 0.2798 ± 0.0586, train stats ebd_grad: 1.8675, clf_grad: 1.2499
22/09/30 09:03:50, ep 10, train acc: 0.5097 ± 0.0868
22/09/30 09:04:05, ep 10, val acc: 0.2934 ± 0.0724, train stats ebd_grad: 1.8899, clf_grad: 1.2466
22/09/30 09:04:39, ep 11, val acc: 0.2918 ± 0.0671, train stats ebd_grad: 1.9833, clf_grad: 1.3883
22/09/30 09:05:12, ep 12, val acc: 0.2914 ± 0.0594, train stats ebd_grad: 1.8807, clf_grad: 1.3514
22/09/30 09:05:45, ep 13, val acc: 0.3014 ± 0.0702, train stats ebd_grad: 1.9133, clf_grad: 1.3311
22/09/30 09:05:45, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/13
22/09/30 09:06:23, ep 14, val acc: 0.2865 ± 0.0739, train stats ebd_grad: 1.8475, clf_grad: 1.3508
22/09/30 09:06:56, ep 15, val acc: 0.2998 ± 0.0621, train stats ebd_grad: 1.8047, clf_grad: 1.3193
22/09/30 09:07:30, ep 16, val acc: 0.2822 ± 0.0617, train stats ebd_grad: 1.9600, clf_grad: 1.4966
22/09/30 09:08:03, ep 17, val acc: 0.2902 ± 0.0709, train stats ebd_grad: 1.8314, clf_grad: 1.3447
22/09/30 09:08:36, ep 18, val acc: 0.3108 ± 0.0674, train stats ebd_grad: 1.8355, clf_grad: 1.4554
22/09/30 09:08:36, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/18
22/09/30 09:09:11, ep 19, val acc: 0.3003 ± 0.0697, train stats ebd_grad: 1.7887, clf_grad: 1.5113
22/09/30 09:09:44, ep 20, train acc: 0.5346 ± 0.1068
22/09/30 09:09:58, ep 20, val acc: 0.2990 ± 0.0643, train stats ebd_grad: 1.8295, clf_grad: 1.3671
22/09/30 09:10:32, ep 21, val acc: 0.3064 ± 0.0760, train stats ebd_grad: 1.7967, clf_grad: 1.3695
22/09/30 09:11:06, ep 22, val acc: 0.3014 ± 0.0685, train stats ebd_grad: 1.7788, clf_grad: 1.3475
22/09/30 09:11:38, ep 23, val acc: 0.2991 ± 0.0750, train stats ebd_grad: 1.9317, clf_grad: 1.3862
22/09/30 09:12:11, ep 24, val acc: 0.3006 ± 0.0658, train stats ebd_grad: 1.8795, clf_grad: 1.4360
22/09/30 09:12:45, ep 25, val acc: 0.2970 ± 0.0669, train stats ebd_grad: 1.8233, clf_grad: 1.5418
22/09/30 09:13:17, ep 26, val acc: 0.3048 ± 0.0665, train stats ebd_grad: 1.7199, clf_grad: 1.4293
22/09/30 09:13:55, ep 27, val acc: 0.2917 ± 0.0798, train stats ebd_grad: 1.8592, clf_grad: 1.5341
22/09/30 09:14:28, ep 28, val acc: 0.2974 ± 0.0720, train stats ebd_grad: 1.8037, clf_grad: 1.5600
22/09/30 09:15:01, ep 29, val acc: 0.2789 ± 0.0632, train stats ebd_grad: 1.8537, clf_grad: 1.4444
22/09/30 09:15:35, ep 30, train acc: 0.5600 ± 0.1043
22/09/30 09:15:49, ep 30, val acc: 0.3014 ± 0.0681, train stats ebd_grad: 1.8087, clf_grad: 1.4938
22/09/30 09:16:23, ep 31, val acc: 0.3167 ± 0.0844, train stats ebd_grad: 1.9157, clf_grad: 1.5915
22/09/30 09:16:23, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/31
22/09/30 09:17:02, ep 32, val acc: 0.3246 ± 0.0780, train stats ebd_grad: 1.8063, clf_grad: 1.6246
22/09/30 09:17:02, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/32
22/09/30 09:17:37, ep 33, val acc: 0.3207 ± 0.0732, train stats ebd_grad: 1.7146, clf_grad: 1.4753
22/09/30 09:18:11, ep 34, val acc: 0.2996 ± 0.0670, train stats ebd_grad: 1.8184, clf_grad: 1.5294
22/09/30 09:18:43, ep 35, val acc: 0.3086 ± 0.0701, train stats ebd_grad: 1.7069, clf_grad: 1.4476
22/09/30 09:19:17, ep 36, val acc: 0.2926 ± 0.0648, train stats ebd_grad: 1.8122, clf_grad: 1.4456
22/09/30 09:19:50, ep 37, val acc: 0.3006 ± 0.0678, train stats ebd_grad: 1.7452, clf_grad: 1.5329
22/09/30 09:20:23, ep 38, val acc: 0.3358 ± 0.0666, train stats ebd_grad: 1.7331, clf_grad: 1.4756
22/09/30 09:20:23, Save cur best model to /work/home/yanling_1/experiments/dingshuo/recurrent/Distributional-Signatures-master/src/tmp-runs/16644989957515234/38
22/09/30 09:20:58, ep 39, val acc: 0.2932 ± 0.0636, train stats ebd_grad: 1.7973, clf_grad: 1.6944
22/09/30 09:21:31, ep 40, train acc: 0.5589 ± 0.1190
22/09/30 09:21:46, ep 40, val acc: 0.2854 ± 0.0679, train stats ebd_grad: 1.8996, clf_grad: 1.7858
22/09/30 09:22:19, ep 41, val acc: 0.3073 ± 0.0719, train stats ebd_grad: 1.7938, clf_grad: 1.6191
22/09/30 09:22:51, ep 42, val acc: 0.3002 ± 0.0802, train stats ebd_grad: 1.6891, clf_grad: 1.5818
22/09/30 09:23:24, ep 43, val acc: 0.2862 ± 0.0643, train stats ebd_grad: 1.8131, clf_grad: 1.6790
22/09/30 09:23:58, ep 44, val acc: 0.3187 ± 0.0674, train stats ebd_grad: 1.6642, clf_grad: 1.5563
22/09/30 09:24:30, ep 45, val acc: 0.3042 ± 0.0756, train stats ebd_grad: 1.7754, clf_grad: 1.6751
22/09/30 09:25:04, ep 46, val acc: 0.3151 ± 0.0712, train stats ebd_grad: 1.7468, clf_grad: 1.6247
22/09/30 09:25:37, ep 47, val acc: 0.2917 ± 0.0763, train stats ebd_grad: 1.7025, clf_grad: 1.6211
22/09/30 09:26:10, ep 48, val acc: 0.2995 ± 0.0713, train stats ebd_grad: 1.7485, clf_grad: 1.5730
22/09/30 09:26:44, ep 49, val acc: 0.3090 ± 0.0742, train stats ebd_grad: 1.7462, clf_grad: 1.5936
22/09/30 09:27:18, ep 50, train acc: 0.5898 ± 0.1078
22/09/30 09:27:32, ep 50, val acc: 0.3054 ± 0.0727, train stats ebd_grad: 1.7511, clf_grad: 1.6006
22/09/30 09:28:06, ep 51, val acc: 0.3162 ± 0.0735, train stats ebd_grad: 1.6522, clf_grad: 1.5482
22/09/30 09:28:39, ep 52, val acc: 0.3225 ± 0.0674, train stats ebd_grad: 1.6537, clf_grad: 1.6075
22/09/30 09:29:11, ep 53, val acc: 0.3195 ± 0.0817, train stats ebd_grad: 1.7523, clf_grad: 1.7846
22/09/30 09:29:45, ep 54, val acc: 0.2998 ± 0.0622, train stats ebd_grad: 1.7747, clf_grad: 1.6322
22/09/30 09:30:18, ep 55, val acc: 0.3150 ± 0.0731, train stats ebd_grad: 1.7161, clf_grad: 1.8001
22/09/30 09:30:51, ep 56, val acc: 0.3072 ± 0.0717, train stats ebd_grad: 1.7957, clf_grad: 1.6432
22/09/30 09:31:24, ep 57, val acc: 0.3070 ± 0.0757, train stats ebd_grad: 1.6782, clf_grad: 1.6280
22/09/30 09:31:58, ep 58, val acc: 0.2934 ± 0.0678, train stats ebd_grad: 1.7224, clf_grad: 1.5827
22/09/30 09:31:58, End of training. Restore the best weights
22/09/30 09:32:15, acc mean 0.3062, std 0.0801
22/09/30 09:34:48, acc mean 0.3079, std 0.0877
```