Closed · jianghaojun closed this issue 2 years ago
Hi @jianghaojun, sorry for the delayed reply. Frame length is an important factor affecting the results (Figure 3), so R@1=37.9 may be reasonable for fps=1.
@ArrowLuo Hi, thanks for your reply.
The frame length is still set to 64 during training.
I only extracted frames from the videos at fps=1 (the original DataLoader also uses video_framerate=1, as shown below at Line 21). For example, I extract 120 frames from a video with a duration of 120 s, then uniformly sample 64 of them for training.
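The sampling step described above can be sketched as follows. This is a minimal illustration, not the repo's actual DataLoader code; `uniform_sample_indices` is a hypothetical helper name.

```python
def uniform_sample_indices(num_frames, num_samples=64):
    """Pick num_samples frame indices spread uniformly over [0, num_frames).

    If the video has fewer frames than requested, keep them all
    (the real DataLoader would then pad to the fixed frame length).
    """
    if num_frames <= num_samples:
        return list(range(num_frames))
    step = num_frames / num_samples
    # floor(step * i) is strictly increasing because step > 1,
    # so the 64 indices are unique and cover the whole clip
    return [int(step * i) for i in range(num_samples)]
```

For a 120 s video extracted at fps=1 (120 frames), this returns 64 distinct indices from 0 up to 118.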
Hi @jianghaojun, thanks, I got it. What about the other results, e.g., R@5 and R@10?
@ArrowLuo Hi, the other metrics are R@5=70.1, R@10=82.4, MeanR=8.1.
Hi @jianghaojun, I cannot find the reason. If your hyper-parameters are the same as ours, the results should be similar and the gap small. Can you check your epochs, batch size, lr, and max_words again?
@ArrowLuo My log and yaml are below. I use gradient accumulation=2, so the batch size for one forward pass is 64.
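For reference, gradient accumulation with 2 steps means gradients from two forward/backward passes of 64 samples are summed before a single optimizer update, giving an effective batch of 128. Below is a minimal pure-Python sketch of that pattern, independent of the repo's actual training loop; `sgd_with_accumulation` and `grad_fn` are illustrative names.

```python
def sgd_with_accumulation(grad_fn, w, batches, lr=0.1, accum_steps=2):
    """Plain SGD where gradients are accumulated over accum_steps
    micro-batches before each parameter update, so the effective
    batch size is accum_steps times the micro-batch size."""
    g = 0.0
    for step, batch in enumerate(batches):
        # divide by accum_steps so the accumulated gradient is an
        # average over the effective batch, matching a single large batch
        g += grad_fn(w, batch) / accum_steps
        if (step + 1) % accum_steps == 0:
            w -= lr * g   # one optimizer step per accum_steps micro-batches
            g = 0.0       # reset the accumulator
    return w
```

In a PyTorch loop the same idea is `(loss / accum_steps).backward()` on every micro-batch and `optimizer.step(); optimizer.zero_grad()` every `accum_steps` iterations.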
2022-10-15 14:52:53,757:INFO: device: cuda:3 n_gpu: 8
2022-10-15 14:52:53,760:INFO: device: cuda:1 n_gpu: 8
2022-10-15 14:52:53,760:INFO: device: cuda:2 n_gpu: 8
2022-10-15 14:52:53,760:INFO: device: cuda:4 n_gpu: 8
2022-10-15 14:52:53,760:INFO: device: cuda:5 n_gpu: 8
2022-10-15 14:52:53,762:INFO: device: cuda:7 n_gpu: 8
2022-10-15 14:52:53,762:INFO: device: cuda:0 n_gpu: 8
2022-10-15 14:52:53,763:INFO: device: cuda:6 n_gpu: 8
2022-10-15 14:52:55,216:INFO: loading archive file /cluster/home/hjjiang/clip4clip/modules/cross-base
2022-10-15 14:52:55,217:INFO: Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 512,
"initializer_range": 0.02,
"intermediate_size": 2048,
"max_position_embeddings": 128,
"num_attention_heads": 8,
"num_hidden_layers": 4,
"type_vocab_size": 2,
"vocab_size": 512
}
2022-10-15 14:52:55,217:INFO: Weight doesn't exsits. /cluster/home/hjjiang/clip4clip/modules/cross-base/cross_pytorch_model.bin
2022-10-15 14:52:55,217:WARNING: Stage-One:True, Stage-Two:False
2022-10-15 14:52:55,217:WARNING: Test retrieval by loose type.
2022-10-15 14:52:55,217:WARNING: embed_dim: 512
2022-10-15 14:52:55,217:WARNING: image_resolution: 224
2022-10-15 14:52:55,217:WARNING: vision_layers: 12
2022-10-15 14:52:55,217:WARNING: vision_width: 768
2022-10-15 14:52:55,217:WARNING: vision_patch_size: 32
2022-10-15 14:52:55,217:WARNING: context_length: 77
2022-10-15 14:52:55,217:WARNING: vocab_size: 49408
2022-10-15 14:52:55,217:WARNING: transformer_width: 512
2022-10-15 14:52:55,217:WARNING: transformer_heads: 8
2022-10-15 14:52:55,218:WARNING: transformer_layers: 12
2022-10-15 14:52:55,218:WARNING: linear_patch: 2d
2022-10-15 14:52:55,218:WARNING: cut_top_layer: 0
2022-10-15 14:52:56,724:WARNING: sim_header: meanP
2022-10-15 14:53:00,107:INFO: --------------------
2022-10-15 14:53:00,108:INFO: Weights from pretrained model not used in CLIP4Clip:
clip.input_resolution
clip.context_length
clip.vocab_size
2022-10-15 14:53:01,643:INFO: ***** Running test *****
2022-10-15 14:53:01,644:INFO: Num examples = 4917
2022-10-15 14:53:01,644:INFO: Batch size = 32
2022-10-15 14:53:01,644:INFO: Num steps = 154
2022-10-15 14:53:01,644:INFO: ***** Running val *****
2022-10-15 14:53:01,644:INFO: Num examples = 4917
2022-10-15 14:53:03,670:INFO: ***** Running training *****
2022-10-15 14:53:03,670:INFO: Num examples = 10009
2022-10-15 14:53:03,671:INFO: Batch size = 64
2022-10-15 14:53:03,671:INFO: Num steps = 785
2022-10-15 14:53:14,735:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,839:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,839:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,855:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,855:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,855:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,856:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:14,856:INFO: Reducer buckets have been rebuilt in this iteration.
2022-10-15 14:53:15,392:INFO: Epoch: 1/5, Step: 2/156, Loss: 0.292273, Time/step: 5.832319
2022-10-15 14:53:16,734:INFO: Epoch: 1/5, Step: 4/156, Loss: 0.374614, Time/step: 0.666542
2022-10-15 14:53:18,049:INFO: Epoch: 1/5, Step: 6/156, Loss: 0.346119, Time/step: 0.655135
2022-10-15 14:53:19,355:INFO: Epoch: 1/5, Step: 8/156, Loss: 0.302834, Time/step: 0.650446
2022-10-15 14:53:20,836:INFO: Epoch: 1/5, Step: 10/156, Loss: 0.223408, Time/step: 0.737671
2022-10-15 14:53:22,213:INFO: Epoch: 1/5, Step: 12/156, Loss: 0.345664, Time/step: 0.686087
2022-10-15 14:53:23,695:INFO: Epoch: 1/5, Step: 14/156, Loss: 0.239475, Time/step: 0.737913
2022-10-15 14:53:25,313:INFO: Epoch: 1/5, Step: 16/156, Loss: 0.458020, Time/step: 0.806698
2022-10-15 14:53:26,789:INFO: Epoch: 1/5, Step: 18/156, Loss: 0.292199, Time/step: 0.734792
2022-10-15 14:53:28,169:INFO: Epoch: 1/5, Step: 20/156, Loss: 0.296925, Time/step: 0.687182
2022-10-15 14:53:29,701:INFO: Epoch: 1/5, Step: 22/156, Loss: 0.285679, Time/step: 0.763596
2022-10-15 14:53:31,025:INFO: Epoch: 1/5, Step: 24/156, Loss: 0.326472, Time/step: 0.659039
2022-10-15 14:53:32,418:INFO: Epoch: 1/5, Step: 26/156, Loss: 0.219738, Time/step: 0.693611
2022-10-15 14:53:33,924:INFO: Epoch: 1/5, Step: 28/156, Loss: 0.198181, Time/step: 0.750283
2022-10-15 14:53:35,238:INFO: Epoch: 1/5, Step: 30/156, Loss: 0.216522, Time/step: 0.654686
2022-10-15 14:53:36,602:INFO: Epoch: 1/5, Step: 32/156, Loss: 0.191322, Time/step: 0.679211
2022-10-15 14:53:38,101:INFO: Epoch: 1/5, Step: 34/156, Loss: 0.191361, Time/step: 0.746830
2022-10-15 14:53:39,515:INFO: Epoch: 1/5, Step: 36/156, Loss: 0.178442, Time/step: 0.703742
2022-10-15 14:53:40,990:INFO: Epoch: 1/5, Step: 38/156, Loss: 0.131960, Time/step: 0.734914
2022-10-15 14:53:42,345:INFO: Epoch: 1/5, Step: 40/156, Loss: 0.179590, Time/step: 0.674121
2022-10-15 14:53:43,781:INFO: Epoch: 1/5, Step: 42/156, Loss: 0.286277, Time/step: 0.715329
2022-10-15 14:53:45,179:INFO: Epoch: 1/5, Step: 44/156, Loss: 0.288163, Time/step: 0.696061
2022-10-15 14:53:46,600:INFO: Epoch: 1/5, Step: 46/156, Loss: 0.139363, Time/step: 0.707753
2022-10-15 14:53:47,983:INFO: Epoch: 1/5, Step: 48/156, Loss: 0.139455, Time/step: 0.688683
2022-10-15 14:53:49,346:INFO: Epoch: 1/5, Step: 50/156, Loss: 0.120081, Time/step: 0.678481
2022-10-15 14:53:50,897:INFO: Epoch: 1/5, Step: 52/156, Loss: 0.083034, Time/step: 0.772170
2022-10-15 14:53:52,297:INFO: Epoch: 1/5, Step: 54/156, Loss: 0.203893, Time/step: 0.697293
2022-10-15 14:53:53,740:INFO: Epoch: 1/5, Step: 56/156, Loss: 0.135466, Time/step: 0.718830
2022-10-15 14:53:55,126:INFO: Epoch: 1/5, Step: 58/156, Loss: 0.179435, Time/step: 0.690054
2022-10-15 14:53:56,499:INFO: Epoch: 1/5, Step: 60/156, Loss: 0.154588, Time/step: 0.683420
2022-10-15 14:53:57,947:INFO: Epoch: 1/5, Step: 62/156, Loss: 0.258550, Time/step: 0.720860
2022-10-15 14:53:59,325:INFO: Epoch: 1/5, Step: 64/156, Loss: 0.105858, Time/step: 0.686110
2022-10-15 14:54:00,860:INFO: Epoch: 1/5, Step: 66/156, Loss: 0.166507, Time/step: 0.764340
2022-10-15 14:54:02,191:INFO: Epoch: 1/5, Step: 68/156, Loss: 0.195220, Time/step: 0.662432
2022-10-15 14:54:03,566:INFO: Epoch: 1/5, Step: 70/156, Loss: 0.118039, Time/step: 0.684544
2022-10-15 14:54:05,022:INFO: Epoch: 1/5, Step: 72/156, Loss: 0.127034, Time/step: 0.725034
2022-10-15 14:54:06,447:INFO: Epoch: 1/5, Step: 74/156, Loss: 0.109204, Time/step: 0.709704
2022-10-15 14:54:07,921:INFO: Epoch: 1/5, Step: 76/156, Loss: 0.170604, Time/step: 0.734332
2022-10-15 14:54:09,298:INFO: Epoch: 1/5, Step: 78/156, Loss: 0.149804, Time/step: 0.685487
2022-10-15 14:54:10,786:INFO: Epoch: 1/5, Step: 80/156, Loss: 0.087503, Time/step: 0.741195
2022-10-15 14:54:12,131:INFO: Epoch: 1/5, Step: 82/156, Loss: 0.091146, Time/step: 0.669689
2022-10-15 14:54:13,590:INFO: Epoch: 1/5, Step: 84/156, Loss: 0.104996, Time/step: 0.726749
2022-10-15 14:54:15,027:INFO: Epoch: 1/5, Step: 86/156, Loss: 0.128207, Time/step: 0.715320
2022-10-15 14:54:16,446:INFO: Epoch: 1/5, Step: 88/156, Loss: 0.136580, Time/step: 0.706711
2022-10-15 14:54:17,834:INFO: Epoch: 1/5, Step: 90/156, Loss: 0.133839, Time/step: 0.691316
2022-10-15 14:54:19,230:INFO: Epoch: 1/5, Step: 92/156, Loss: 0.076821, Time/step: 0.695369
2022-10-15 14:54:20,659:INFO: Epoch: 1/5, Step: 94/156, Loss: 0.101285, Time/step: 0.711619
2022-10-15 14:54:22,161:INFO: Epoch: 1/5, Step: 96/156, Loss: 0.127516, Time/step: 0.747513
2022-10-15 14:54:23,555:INFO: Epoch: 1/5, Step: 98/156, Loss: 0.112362, Time/step: 0.694064
2022-10-15 14:54:24,941:INFO: Epoch: 1/5, Step: 100/156, Loss: 0.094079, Time/step: 0.690024
2022-10-15 14:54:26,290:INFO: Epoch: 1/5, Step: 102/156, Loss: 0.081938, Time/step: 0.671399
2022-10-15 14:54:27,668:INFO: Epoch: 1/5, Step: 104/156, Loss: 0.175643, Time/step: 0.685600
2022-10-15 14:54:29,093:INFO: Epoch: 1/5, Step: 106/156, Loss: 0.147155, Time/step: 0.709817
2022-10-15 14:54:30,479:INFO: Epoch: 1/5, Step: 108/156, Loss: 0.100715, Time/step: 0.690177
2022-10-15 14:54:31,897:INFO: Epoch: 1/5, Step: 110/156, Loss: 0.192052, Time/step: 0.706248
2022-10-15 14:54:33,393:INFO: Epoch: 1/5, Step: 112/156, Loss: 0.151776, Time/step: 0.745112
2022-10-15 14:54:34,801:INFO: Epoch: 1/5, Step: 114/156, Loss: 0.167991, Time/step: 0.700992
2022-10-15 14:54:36,236:INFO: Epoch: 1/5, Step: 116/156, Loss: 0.128237, Time/step: 0.714860
2022-10-15 14:54:37,758:INFO: Epoch: 1/5, Step: 118/156, Loss: 0.112536, Time/step: 0.757931
2022-10-15 14:54:39,155:INFO: Epoch: 1/5, Step: 120/156, Loss: 0.112582, Time/step: 0.695804
2022-10-15 14:54:40,510:INFO: Epoch: 1/5, Step: 122/156, Loss: 0.092542, Time/step: 0.674180
2022-10-15 14:54:41,898:INFO: Epoch: 1/5, Step: 124/156, Loss: 0.056119, Time/step: 0.691052
2022-10-15 14:54:43,297:INFO: Epoch: 1/5, Step: 126/156, Loss: 0.104303, Time/step: 0.696558
2022-10-15 14:54:44,763:INFO: Epoch: 1/5, Step: 128/156, Loss: 0.148606, Time/step: 0.729959
2022-10-15 14:54:46,162:INFO: Epoch: 1/5, Step: 130/156, Loss: 0.246882, Time/step: 0.696581
2022-10-15 14:54:47,493:INFO: Epoch: 1/5, Step: 132/156, Loss: 0.091124, Time/step: 0.663009
2022-10-15 14:54:48,856:INFO: Epoch: 1/5, Step: 134/156, Loss: 0.119128, Time/step: 0.678598
2022-10-15 14:54:50,185:INFO: Epoch: 1/5, Step: 136/156, Loss: 0.158681, Time/step: 0.661030
2022-10-15 14:54:51,606:INFO: Epoch: 1/5, Step: 138/156, Loss: 0.068085, Time/step: 0.707698
2022-10-15 14:54:52,999:INFO: Epoch: 1/5, Step: 140/156, Loss: 0.073781, Time/step: 0.693268
2022-10-15 14:54:54,307:INFO: Epoch: 1/5, Step: 142/156, Loss: 0.097889, Time/step: 0.650961
2022-10-15 14:54:55,520:INFO: Epoch: 1/5, Step: 144/156, Loss: 0.089500, Time/step: 0.603928
2022-10-15 14:54:56,740:INFO: Epoch: 1/5, Step: 146/156, Loss: 0.094724, Time/step: 0.607477
2022-10-15 14:54:57,886:INFO: Epoch: 1/5, Step: 148/156, Loss: 0.157067, Time/step: 0.570256
2022-10-15 14:54:59,027:INFO: Epoch: 1/5, Step: 150/156, Loss: 0.091411, Time/step: 0.568419
2022-10-15 14:55:00,172:INFO: Epoch: 1/5, Step: 152/156, Loss: 0.055013, Time/step: 0.569932
2022-10-15 14:55:01,355:INFO: Epoch: 1/5, Step: 154/156, Loss: 0.028479, Time/step: 0.589243
2022-10-15 14:55:02,526:INFO: Epoch: 1/5, Step: 156/156, Loss: 0.048552, Time/step: 0.582880
2022-10-15 14:55:02,927:INFO: Epoch 1/5 Finished, Train Loss: 0.163470
2022-10-15 14:59:06,159:INFO: sim matrix size: 4917, 4917
2022-10-15 14:59:06,944:INFO: Length-T: 4917, Length-V:4917
2022-10-15 14:59:06,945:INFO: Text-to-Video:
2022-10-15 14:59:06,945:INFO: >>> R@1: 34.0 - R@5: 64.1 - R@10: 77.7 - Median R: 3.0 - Mean R: 10.5
2022-10-15 14:59:06,945:INFO: Video-to-Text:
2022-10-15 14:59:06,945:INFO: >>> V2T$R@1: 36.2 - V2T$R@5: 67.1 - V2T$R@10: 79.3 - V2T$Median R: 3.0 - V2T$Mean R: 9.5
2022-10-15 14:59:06,958:INFO: the R1 is: 34.0045
2022-10-15 14:59:14,008:INFO: Epoch: 2/5, Step: 2/156, Loss: 0.071813, Time/step: 2.511979
2022-10-15 14:59:15,228:INFO: Epoch: 2/5, Step: 4/156, Loss: 0.135356, Time/step: 0.604794
2022-10-15 14:59:16,491:INFO: Epoch: 2/5, Step: 6/156, Loss: 0.083077, Time/step: 0.627644
2022-10-15 14:59:17,853:INFO: Epoch: 2/5, Step: 8/156, Loss: 0.092368, Time/step: 0.676749
2022-10-15 14:59:19,236:INFO: Epoch: 2/5, Step: 10/156, Loss: 0.054984, Time/step: 0.686914
2022-10-15 14:59:20,680:INFO: Epoch: 2/5, Step: 12/156, Loss: 0.038660, Time/step: 0.717959
2022-10-15 14:59:21,995:INFO: Epoch: 2/5, Step: 14/156, Loss: 0.073670, Time/step: 0.653063
2022-10-15 14:59:23,530:INFO: Epoch: 2/5, Step: 16/156, Loss: 0.124033, Time/step: 0.763671
2022-10-15 14:59:25,049:INFO: Epoch: 2/5, Step: 18/156, Loss: 0.040721, Time/step: 0.755016
2022-10-15 14:59:26,422:INFO: Epoch: 2/5, Step: 20/156, Loss: 0.082017, Time/step: 0.681849
2022-10-15 14:59:27,835:INFO: Epoch: 2/5, Step: 22/156, Loss: 0.065632, Time/step: 0.702924
2022-10-15 14:59:29,342:INFO: Epoch: 2/5, Step: 24/156, Loss: 0.047335, Time/step: 0.749635
2022-10-15 14:59:30,862:INFO: Epoch: 2/5, Step: 26/156, Loss: 0.105558, Time/step: 0.755910
2022-10-15 14:59:32,410:INFO: Epoch: 2/5, Step: 28/156, Loss: 0.055878, Time/step: 0.769959
2022-10-15 14:59:33,846:INFO: Epoch: 2/5, Step: 30/156, Loss: 0.039998, Time/step: 0.714664
2022-10-15 14:59:35,191:INFO: Epoch: 2/5, Step: 32/156, Loss: 0.118170, Time/step: 0.668574
2022-10-15 14:59:36,681:INFO: Epoch: 2/5, Step: 34/156, Loss: 0.104301, Time/step: 0.740636
2022-10-15 14:59:38,208:INFO: Epoch: 2/5, Step: 36/156, Loss: 0.046775, Time/step: 0.759876
2022-10-15 14:59:39,591:INFO: Epoch: 2/5, Step: 38/156, Loss: 0.098973, Time/step: 0.687686
2022-10-15 14:59:41,023:INFO: Epoch: 2/5, Step: 40/156, Loss: 0.096516, Time/step: 0.711912
2022-10-15 14:59:42,402:INFO: Epoch: 2/5, Step: 42/156, Loss: 0.070437, Time/step: 0.686183
2022-10-15 14:59:43,812:INFO: Epoch: 2/5, Step: 44/156, Loss: 0.102682, Time/step: 0.701433
2022-10-15 14:59:45,318:INFO: Epoch: 2/5, Step: 46/156, Loss: 0.124937, Time/step: 0.748790
2022-10-15 14:59:46,799:INFO: Epoch: 2/5, Step: 48/156, Loss: 0.127601, Time/step: 0.736812
2022-10-15 14:59:48,156:INFO: Epoch: 2/5, Step: 50/156, Loss: 0.074532, Time/step: 0.674347
2022-10-15 14:59:49,620:INFO: Epoch: 2/5, Step: 52/156, Loss: 0.061653, Time/step: 0.727960
2022-10-15 14:59:50,991:INFO: Epoch: 2/5, Step: 54/156, Loss: 0.052438, Time/step: 0.681994
2022-10-15 14:59:52,338:INFO: Epoch: 2/5, Step: 56/156, Loss: 0.052655, Time/step: 0.668750
2022-10-15 14:59:53,820:INFO: Epoch: 2/5, Step: 58/156, Loss: 0.098682, Time/step: 0.737091
2022-10-15 14:59:55,345:INFO: Epoch: 2/5, Step: 60/156, Loss: 0.050309, Time/step: 0.758664
2022-10-15 14:59:56,843:INFO: Epoch: 2/5, Step: 62/156, Loss: 0.083033, Time/step: 0.744676
2022-10-15 14:59:58,250:INFO: Epoch: 2/5, Step: 64/156, Loss: 0.038846, Time/step: 0.698906
2022-10-15 14:59:59,776:INFO: Epoch: 2/5, Step: 66/156, Loss: 0.047035, Time/step: 0.758955
2022-10-15 15:00:01,204:INFO: Epoch: 2/5, Step: 68/156, Loss: 0.059525, Time/step: 0.710714
2022-10-15 15:00:02,644:INFO: Epoch: 2/5, Step: 70/156, Loss: 0.042412, Time/step: 0.715914
2022-10-15 15:00:03,969:INFO: Epoch: 2/5, Step: 72/156, Loss: 0.061419, Time/step: 0.658629
2022-10-15 15:00:05,382:INFO: Epoch: 2/5, Step: 74/156, Loss: 0.051010, Time/step: 0.701558
2022-10-15 15:00:06,894:INFO: Epoch: 2/5, Step: 76/156, Loss: 0.096196, Time/step: 0.751961
2022-10-15 15:00:08,273:INFO: Epoch: 2/5, Step: 78/156, Loss: 0.057687, Time/step: 0.684909
2022-10-15 15:00:09,707:INFO: Epoch: 2/5, Step: 80/156, Loss: 0.040132, Time/step: 0.712032
2022-10-15 15:00:11,121:INFO: Epoch: 2/5, Step: 82/156, Loss: 0.068585, Time/step: 0.703224
2022-10-15 15:00:12,535:INFO: Epoch: 2/5, Step: 84/156, Loss: 0.079730, Time/step: 0.702544
2022-10-15 15:00:13,904:INFO: Epoch: 2/5, Step: 86/156, Loss: 0.059014, Time/step: 0.680435
2022-10-15 15:00:15,335:INFO: Epoch: 2/5, Step: 88/156, Loss: 0.061798, Time/step: 0.711914
2022-10-15 15:00:16,704:INFO: Epoch: 2/5, Step: 90/156, Loss: 0.088087, Time/step: 0.680330
2022-10-15 15:00:18,167:INFO: Epoch: 2/5, Step: 92/156, Loss: 0.099840, Time/step: 0.726992
2022-10-15 15:00:19,640:INFO: Epoch: 2/5, Step: 94/156, Loss: 0.081870, Time/step: 0.732708
2022-10-15 15:00:21,205:INFO: Epoch: 2/5, Step: 96/156, Loss: 0.126197, Time/step: 0.778372
2022-10-15 15:00:22,718:INFO: Epoch: 2/5, Step: 98/156, Loss: 0.093007, Time/step: 0.752368
2022-10-15 15:00:24,147:INFO: Epoch: 2/5, Step: 100/156, Loss: 0.135481, Time/step: 0.710197
2022-10-15 15:00:25,507:INFO: Epoch: 2/5, Step: 102/156, Loss: 0.106272, Time/step: 0.676436
2022-10-15 15:00:26,958:INFO: Epoch: 2/5, Step: 104/156, Loss: 0.082325, Time/step: 0.721035
2022-10-15 15:00:28,404:INFO: Epoch: 2/5, Step: 106/156, Loss: 0.054531, Time/step: 0.718407
2022-10-15 15:00:29,833:INFO: Epoch: 2/5, Step: 108/156, Loss: 0.081617, Time/step: 0.710026
2022-10-15 15:00:31,179:INFO: Epoch: 2/5, Step: 110/156, Loss: 0.058341, Time/step: 0.669058
2022-10-15 15:00:32,573:INFO: Epoch: 2/5, Step: 112/156, Loss: 0.063207, Time/step: 0.692488
2022-10-15 15:00:34,101:INFO: Epoch: 2/5, Step: 114/156, Loss: 0.095330, Time/step: 0.759518
2022-10-15 15:00:35,524:INFO: Epoch: 2/5, Step: 116/156, Loss: 0.059384, Time/step: 0.707267
2022-10-15 15:00:37,018:INFO: Epoch: 2/5, Step: 118/156, Loss: 0.065741, Time/step: 0.742912
2022-10-15 15:00:38,487:INFO: Epoch: 2/5, Step: 120/156, Loss: 0.097826, Time/step: 0.730239
2022-10-15 15:00:39,983:INFO: Epoch: 2/5, Step: 122/156, Loss: 0.046098, Time/step: 0.743992
2022-10-15 15:00:41,389:INFO: Epoch: 2/5, Step: 124/156, Loss: 0.060407, Time/step: 0.696724
2022-10-15 15:00:42,723:INFO: Epoch: 2/5, Step: 126/156, Loss: 0.063605, Time/step: 0.662269
2022-10-15 15:00:44,144:INFO: Epoch: 2/5, Step: 128/156, Loss: 0.076046, Time/step: 0.706635
2022-10-15 15:00:45,512:INFO: Epoch: 2/5, Step: 130/156, Loss: 0.090085, Time/step: 0.679856
2022-10-15 15:00:46,909:INFO: Epoch: 2/5, Step: 132/156, Loss: 0.046405, Time/step: 0.693775
2022-10-15 15:00:48,333:INFO: Epoch: 2/5, Step: 134/156, Loss: 0.055695, Time/step: 0.708253
2022-10-15 15:00:49,847:INFO: Epoch: 2/5, Step: 136/156, Loss: 0.144003, Time/step: 0.752200
2022-10-15 15:00:51,420:INFO: Epoch: 2/5, Step: 138/156, Loss: 0.102659, Time/step: 0.782245
2022-10-15 15:00:52,930:INFO: Epoch: 2/5, Step: 140/156, Loss: 0.120012, Time/step: 0.750698
2022-10-15 15:00:54,301:INFO: Epoch: 2/5, Step: 142/156, Loss: 0.108267, Time/step: 0.681149
2022-10-15 15:00:55,521:INFO: Epoch: 2/5, Step: 144/156, Loss: 0.058467, Time/step: 0.605985
2022-10-15 15:00:56,695:INFO: Epoch: 2/5, Step: 146/156, Loss: 0.060956, Time/step: 0.583754
2022-10-15 15:00:57,947:INFO: Epoch: 2/5, Step: 148/156, Loss: 0.071952, Time/step: 0.622206
2022-10-15 15:00:59,092:INFO: Epoch: 2/5, Step: 150/156, Loss: 0.073149, Time/step: 0.569158
2022-10-15 15:01:00,309:INFO: Epoch: 2/5, Step: 152/156, Loss: 0.076030, Time/step: 0.604267
2022-10-15 15:01:01,503:INFO: Epoch: 2/5, Step: 154/156, Loss: 0.076036, Time/step: 0.592892
2022-10-15 15:01:02,749:INFO: Epoch: 2/5, Step: 156/156, Loss: 0.085525, Time/step: 0.619012
2022-10-15 15:01:03,048:INFO: Epoch 2/5 Finished, Train Loss: 0.074089
2022-10-15 15:04:49,914:INFO: sim matrix size: 4917, 4917
2022-10-15 15:04:50,698:INFO: Length-T: 4917, Length-V:4917
2022-10-15 15:04:50,699:INFO: Text-to-Video:
2022-10-15 15:04:50,699:INFO: >>> R@1: 36.5 - R@5: 67.3 - R@10: 80.3 - Median R: 2.0 - Mean R: 9.2
2022-10-15 15:04:50,699:INFO: Video-to-Text:
2022-10-15 15:04:50,699:INFO: >>> V2T$R@1: 38.7 - V2T$R@5: 69.2 - V2T$R@10: 81.7 - V2T$Median R: 2.0 - V2T$Mean R: 8.3
2022-10-15 15:04:50,715:INFO: the R1 is: 36.5467
2022-10-15 15:04:58,644:INFO: Epoch: 3/5, Step: 2/156, Loss: 0.078574, Time/step: 2.838628
2022-10-15 15:04:59,862:INFO: Epoch: 3/5, Step: 4/156, Loss: 0.051964, Time/step: 0.601940
2022-10-15 15:05:01,137:INFO: Epoch: 3/5, Step: 6/156, Loss: 0.057696, Time/step: 0.632153
2022-10-15 15:05:02,483:INFO: Epoch: 3/5, Step: 8/156, Loss: 0.081261, Time/step: 0.666730
2022-10-15 15:05:03,908:INFO: Epoch: 3/5, Step: 10/156, Loss: 0.044793, Time/step: 0.706744
2022-10-15 15:05:05,256:INFO: Epoch: 3/5, Step: 12/156, Loss: 0.040284, Time/step: 0.668149
2022-10-15 15:05:06,644:INFO: Epoch: 3/5, Step: 14/156, Loss: 0.071116, Time/step: 0.688839
2022-10-15 15:05:07,945:INFO: Epoch: 3/5, Step: 16/156, Loss: 0.049946, Time/step: 0.644190
2022-10-15 15:05:09,313:INFO: Epoch: 3/5, Step: 18/156, Loss: 0.059890, Time/step: 0.677928
2022-10-15 15:05:10,804:INFO: Epoch: 3/5, Step: 20/156, Loss: 0.049286, Time/step: 0.739582
2022-10-15 15:05:12,218:INFO: Epoch: 3/5, Step: 22/156, Loss: 0.054587, Time/step: 0.700641
2022-10-15 15:05:13,524:INFO: Epoch: 3/5, Step: 24/156, Loss: 0.050530, Time/step: 0.647337
2022-10-15 15:05:14,943:INFO: Epoch: 3/5, Step: 26/156, Loss: 0.032400, Time/step: 0.703734
2022-10-15 15:05:16,411:INFO: Epoch: 3/5, Step: 28/156, Loss: 0.024327, Time/step: 0.728213
2022-10-15 15:05:17,765:INFO: Epoch: 3/5, Step: 30/156, Loss: 0.038212, Time/step: 0.671456
2022-10-15 15:05:19,149:INFO: Epoch: 3/5, Step: 32/156, Loss: 0.112221, Time/step: 0.686344
2022-10-15 15:05:20,575:INFO: Epoch: 3/5, Step: 34/156, Loss: 0.104086, Time/step: 0.707630
2022-10-15 15:05:21,928:INFO: Epoch: 3/5, Step: 36/156, Loss: 0.087453, Time/step: 0.670072
2022-10-15 15:05:23,260:INFO: Epoch: 3/5, Step: 38/156, Loss: 0.026864, Time/step: 0.660870
2022-10-15 15:05:24,666:INFO: Epoch: 3/5, Step: 40/156, Loss: 0.066780, Time/step: 0.697640
2022-10-15 15:05:26,080:INFO: Epoch: 3/5, Step: 42/156, Loss: 0.050706, Time/step: 0.701501
2022-10-15 15:05:27,558:INFO: Epoch: 3/5, Step: 44/156, Loss: 0.096249, Time/step: 0.733648
2022-10-15 15:05:28,899:INFO: Epoch: 3/5, Step: 46/156, Loss: 0.021501, Time/step: 0.665192
2022-10-15 15:05:30,229:INFO: Epoch: 3/5, Step: 48/156, Loss: 0.057558, Time/step: 0.660500
2022-10-15 15:05:31,561:INFO: Epoch: 3/5, Step: 50/156, Loss: 0.058221, Time/step: 0.659983
2022-10-15 15:05:33,035:INFO: Epoch: 3/5, Step: 52/156, Loss: 0.052542, Time/step: 0.732136
2022-10-15 15:05:34,472:INFO: Epoch: 3/5, Step: 54/156, Loss: 0.052662, Time/step: 0.713007
2022-10-15 15:05:35,802:INFO: Epoch: 3/5, Step: 56/156, Loss: 0.037531, Time/step: 0.658948
2022-10-15 15:05:37,155:INFO: Epoch: 3/5, Step: 58/156, Loss: 0.088174, Time/step: 0.671325
2022-10-15 15:05:38,534:INFO: Epoch: 3/5, Step: 60/156, Loss: 0.060356, Time/step: 0.684115
2022-10-15 15:05:39,899:INFO: Epoch: 3/5, Step: 62/156, Loss: 0.048428, Time/step: 0.677037
2022-10-15 15:05:41,223:INFO: Epoch: 3/5, Step: 64/156, Loss: 0.034818, Time/step: 0.656009
2022-10-15 15:05:42,643:INFO: Epoch: 3/5, Step: 66/156, Loss: 0.080204, Time/step: 0.704812
2022-10-15 15:05:44,038:INFO: Epoch: 3/5, Step: 68/156, Loss: 0.043664, Time/step: 0.691753
2022-10-15 15:05:45,400:INFO: Epoch: 3/5, Step: 70/156, Loss: 0.051267, Time/step: 0.676016
2022-10-15 15:05:46,801:INFO: Epoch: 3/5, Step: 72/156, Loss: 0.071173, Time/step: 0.695738
2022-10-15 15:05:48,115:INFO: Epoch: 3/5, Step: 74/156, Loss: 0.042207, Time/step: 0.651419
2022-10-15 15:05:49,466:INFO: Epoch: 3/5, Step: 76/156, Loss: 0.017341, Time/step: 0.670137
2022-10-15 15:05:50,830:INFO: Epoch: 3/5, Step: 78/156, Loss: 0.071745, Time/step: 0.676390
2022-10-15 15:05:52,199:INFO: Epoch: 3/5, Step: 80/156, Loss: 0.028092, Time/step: 0.677772
2022-10-15 15:05:53,572:INFO: Epoch: 3/5, Step: 82/156, Loss: 0.065276, Time/step: 0.680575
2022-10-15 15:05:54,995:INFO: Epoch: 3/5, Step: 84/156, Loss: 0.044051, Time/step: 0.705969
2022-10-15 15:05:56,252:INFO: Epoch: 3/5, Step: 86/156, Loss: 0.055526, Time/step: 0.622947
2022-10-15 15:05:57,647:INFO: Epoch: 3/5, Step: 88/156, Loss: 0.040447, Time/step: 0.689644
2022-10-15 15:05:59,077:INFO: Epoch: 3/5, Step: 90/156, Loss: 0.025318, Time/step: 0.710143
2022-10-15 15:06:00,499:INFO: Epoch: 3/5, Step: 92/156, Loss: 0.059200, Time/step: 0.703760
2022-10-15 15:06:01,775:INFO: Epoch: 3/5, Step: 94/156, Loss: 0.047159, Time/step: 0.633032
2022-10-15 15:06:03,160:INFO: Epoch: 3/5, Step: 96/156, Loss: 0.063278, Time/step: 0.686760
2022-10-15 15:06:04,645:INFO: Epoch: 3/5, Step: 98/156, Loss: 0.038574, Time/step: 0.736758
2022-10-15 15:06:06,010:INFO: Epoch: 3/5, Step: 100/156, Loss: 0.073694, Time/step: 0.676854
2022-10-15 15:06:07,360:INFO: Epoch: 3/5, Step: 102/156, Loss: 0.017323, Time/step: 0.669913
2022-10-15 15:06:08,701:INFO: Epoch: 3/5, Step: 104/156, Loss: 0.017194, Time/step: 0.664249
2022-10-15 15:06:10,013:INFO: Epoch: 3/5, Step: 106/156, Loss: 0.059221, Time/step: 0.649596
2022-10-15 15:06:11,408:INFO: Epoch: 3/5, Step: 108/156, Loss: 0.030529, Time/step: 0.692832
2022-10-15 15:06:12,843:INFO: Epoch: 3/5, Step: 110/156, Loss: 0.021755, Time/step: 0.712191
2022-10-15 15:06:14,198:INFO: Epoch: 3/5, Step: 112/156, Loss: 0.048228, Time/step: 0.671760
2022-10-15 15:06:15,432:INFO: Epoch: 3/5, Step: 114/156, Loss: 0.081141, Time/step: 0.612210
2022-10-15 15:06:16,854:INFO: Epoch: 3/5, Step: 116/156, Loss: 0.068251, Time/step: 0.704722
2022-10-15 15:06:18,260:INFO: Epoch: 3/5, Step: 118/156, Loss: 0.046133, Time/step: 0.698120
2022-10-15 15:06:19,565:INFO: Epoch: 3/5, Step: 120/156, Loss: 0.051103, Time/step: 0.647473
2022-10-15 15:06:20,972:INFO: Epoch: 3/5, Step: 122/156, Loss: 0.123327, Time/step: 0.698069
2022-10-15 15:06:22,372:INFO: Epoch: 3/5, Step: 124/156, Loss: 0.033074, Time/step: 0.694078
2022-10-15 15:06:23,814:INFO: Epoch: 3/5, Step: 126/156, Loss: 0.030968, Time/step: 0.715370
2022-10-15 15:06:25,178:INFO: Epoch: 3/5, Step: 128/156, Loss: 0.049933, Time/step: 0.676062
2022-10-15 15:06:26,595:INFO: Epoch: 3/5, Step: 130/156, Loss: 0.037962, Time/step: 0.702900
2022-10-15 15:06:28,042:INFO: Epoch: 3/5, Step: 132/156, Loss: 0.046380, Time/step: 0.717823
2022-10-15 15:06:29,386:INFO: Epoch: 3/5, Step: 134/156, Loss: 0.036981, Time/step: 0.666694
2022-10-15 15:06:30,708:INFO: Epoch: 3/5, Step: 136/156, Loss: 0.050776, Time/step: 0.655037
2022-10-15 15:06:32,043:INFO: Epoch: 3/5, Step: 138/156, Loss: 0.046314, Time/step: 0.661792
2022-10-15 15:06:33,417:INFO: Epoch: 3/5, Step: 140/156, Loss: 0.021517, Time/step: 0.681316
2022-10-15 15:06:34,688:INFO: Epoch: 3/5, Step: 142/156, Loss: 0.048246, Time/step: 0.629520
2022-10-15 15:06:35,872:INFO: Epoch: 3/5, Step: 144/156, Loss: 0.037371, Time/step: 0.586509
2022-10-15 15:06:37,035:INFO: Epoch: 3/5, Step: 146/156, Loss: 0.024335, Time/step: 0.576310
2022-10-15 15:06:38,318:INFO: Epoch: 3/5, Step: 148/156, Loss: 0.085299, Time/step: 0.636910
2022-10-15 15:06:39,489:INFO: Epoch: 3/5, Step: 150/156, Loss: 0.029944, Time/step: 0.580910
2022-10-15 15:06:40,673:INFO: Epoch: 3/5, Step: 152/156, Loss: 0.021172, Time/step: 0.587179
2022-10-15 15:06:41,878:INFO: Epoch: 3/5, Step: 154/156, Loss: 0.038082, Time/step: 0.597944
2022-10-15 15:06:43,011:INFO: Epoch: 3/5, Step: 156/156, Loss: 0.083354, Time/step: 0.561740
2022-10-15 15:06:43,364:INFO: Epoch 3/5 Finished, Train Loss: 0.050380
2022-10-15 15:10:29,486:INFO: sim matrix size: 4917, 4917
2022-10-15 15:10:30,141:INFO: Length-T: 4917, Length-V:4917
2022-10-15 15:10:30,141:INFO: Text-to-Video:
2022-10-15 15:10:30,141:INFO: >>> R@1: 37.2 - R@5: 69.0 - R@10: 81.5 - Median R: 2.0 - Mean R: 8.6
2022-10-15 15:10:30,141:INFO: Video-to-Text:
2022-10-15 15:10:30,141:INFO: >>> V2T$R@1: 39.1 - V2T$R@5: 70.8 - V2T$R@10: 82.8 - V2T$Median R: 2.0 - V2T$Mean R: 7.7
2022-10-15 15:10:30,160:INFO: the R1 is: 37.2382
2022-10-15 15:10:37,078:INFO: Epoch: 4/5, Step: 2/156, Loss: 0.049338, Time/step: 2.417915
2022-10-15 15:10:38,354:INFO: Epoch: 4/5, Step: 4/156, Loss: 0.040763, Time/step: 0.626952
2022-10-15 15:10:39,564:INFO: Epoch: 4/5, Step: 6/156, Loss: 0.023310, Time/step: 0.598358
2022-10-15 15:10:40,888:INFO: Epoch: 4/5, Step: 8/156, Loss: 0.023442, Time/step: 0.654990
2022-10-15 15:10:42,294:INFO: Epoch: 4/5, Step: 10/156, Loss: 0.006237, Time/step: 0.695586
2022-10-15 15:10:43,725:INFO: Epoch: 4/5, Step: 12/156, Loss: 0.046411, Time/step: 0.707955
2022-10-15 15:10:45,115:INFO: Epoch: 4/5, Step: 14/156, Loss: 0.044159, Time/step: 0.687712
2022-10-15 15:10:46,463:INFO: Epoch: 4/5, Step: 16/156, Loss: 0.066451, Time/step: 0.666678
2022-10-15 15:10:47,792:INFO: Epoch: 4/5, Step: 18/156, Loss: 0.043367, Time/step: 0.657577
2022-10-15 15:10:49,107:INFO: Epoch: 4/5, Step: 20/156, Loss: 0.047439, Time/step: 0.651080
2022-10-15 15:10:50,477:INFO: Epoch: 4/5, Step: 22/156, Loss: 0.017368, Time/step: 0.679134
2022-10-15 15:10:51,824:INFO: Epoch: 4/5, Step: 24/156, Loss: 0.054065, Time/step: 0.666671
2022-10-15 15:10:53,139:INFO: Epoch: 4/5, Step: 26/156, Loss: 0.046446, Time/step: 0.651379
2022-10-15 15:10:54,521:INFO: Epoch: 4/5, Step: 28/156, Loss: 0.026123, Time/step: 0.684522
2022-10-15 15:10:55,849:INFO: Epoch: 4/5, Step: 30/156, Loss: 0.065775, Time/step: 0.656928
2022-10-15 15:10:57,229:INFO: Epoch: 4/5, Step: 32/156, Loss: 0.047831, Time/step: 0.683688
2022-10-15 15:10:58,687:INFO: Epoch: 4/5, Step: 34/156, Loss: 0.058841, Time/step: 0.722941
2022-10-15 15:11:00,072:INFO: Epoch: 4/5, Step: 36/156, Loss: 0.053158, Time/step: 0.685537
2022-10-15 15:11:01,418:INFO: Epoch: 4/5, Step: 38/156, Loss: 0.028982, Time/step: 0.666752
2022-10-15 15:11:02,657:INFO: Epoch: 4/5, Step: 40/156, Loss: 0.039521, Time/step: 0.613248
2022-10-15 15:11:04,038:INFO: Epoch: 4/5, Step: 42/156, Loss: 0.061108, Time/step: 0.684034
2022-10-15 15:11:05,407:INFO: Epoch: 4/5, Step: 44/156, Loss: 0.011418, Time/step: 0.678607
2022-10-15 15:11:06,816:INFO: Epoch: 4/5, Step: 46/156, Loss: 0.029791, Time/step: 0.697697
2022-10-15 15:11:08,126:INFO: Epoch: 4/5, Step: 48/156, Loss: 0.053806, Time/step: 0.648838
2022-10-15 15:11:09,486:INFO: Epoch: 4/5, Step: 50/156, Loss: 0.038702, Time/step: 0.673731
2022-10-15 15:11:10,847:INFO: Epoch: 4/5, Step: 52/156, Loss: 0.057297, Time/step: 0.674531
2022-10-15 15:11:12,197:INFO: Epoch: 4/5, Step: 54/156, Loss: 0.015364, Time/step: 0.667734
2022-10-15 15:11:13,605:INFO: Epoch: 4/5, Step: 56/156, Loss: 0.041655, Time/step: 0.697554
2022-10-15 15:11:14,984:INFO: Epoch: 4/5, Step: 58/156, Loss: 0.026789, Time/step: 0.683029
2022-10-15 15:11:16,233:INFO: Epoch: 4/5, Step: 60/156, Loss: 0.038885, Time/step: 0.617998
2022-10-15 15:11:17,625:INFO: Epoch: 4/5, Step: 62/156, Loss: 0.030300, Time/step: 0.690254
2022-10-15 15:11:18,991:INFO: Epoch: 4/5, Step: 64/156, Loss: 0.018416, Time/step: 0.676715
2022-10-15 15:11:20,351:INFO: Epoch: 4/5, Step: 66/156, Loss: 0.009614, Time/step: 0.674047
2022-10-15 15:11:21,727:INFO: Epoch: 4/5, Step: 68/156, Loss: 0.038349, Time/step: 0.681974
2022-10-15 15:11:23,040:INFO: Epoch: 4/5, Step: 70/156, Loss: 0.063675, Time/step: 0.650150
2022-10-15 15:11:24,314:INFO: Epoch: 4/5, Step: 72/156, Loss: 0.028690, Time/step: 0.629937
2022-10-15 15:11:25,613:INFO: Epoch: 4/5, Step: 74/156, Loss: 0.035552, Time/step: 0.642427
2022-10-15 15:11:26,932:INFO: Epoch: 4/5, Step: 76/156, Loss: 0.047680, Time/step: 0.653613
2022-10-15 15:11:28,366:INFO: Epoch: 4/5, Step: 78/156, Loss: 0.014432, Time/step: 0.710350
2022-10-15 15:11:29,692:INFO: Epoch: 4/5, Step: 80/156, Loss: 0.015809, Time/step: 0.656113
2022-10-15 15:11:31,030:INFO: Epoch: 4/5, Step: 82/156, Loss: 0.038297, Time/step: 0.662725
2022-10-15 15:11:32,354:INFO: Epoch: 4/5, Step: 84/156, Loss: 0.049162, Time/step: 0.654445
2022-10-15 15:11:33,755:INFO: Epoch: 4/5, Step: 86/156, Loss: 0.029629, Time/step: 0.693454
2022-10-15 15:11:35,102:INFO: Epoch: 4/5, Step: 88/156, Loss: 0.071280, Time/step: 0.666469
2022-10-15 15:11:36,411:INFO: Epoch: 4/5, Step: 90/156, Loss: 0.014148, Time/step: 0.648369
2022-10-15 15:11:37,819:INFO: Epoch: 4/5, Step: 92/156, Loss: 0.055634, Time/step: 0.697664
2022-10-15 15:11:39,165:INFO: Epoch: 4/5, Step: 94/156, Loss: 0.027941, Time/step: 0.666490
2022-10-15 15:11:40,520:INFO: Epoch: 4/5, Step: 96/156, Loss: 0.035232, Time/step: 0.670357
2022-10-15 15:11:41,915:INFO: Epoch: 4/5, Step: 98/156, Loss: 0.005500, Time/step: 0.691488
2022-10-15 15:11:43,248:INFO: Epoch: 4/5, Step: 100/156, Loss: 0.045643, Time/step: 0.659878
2022-10-15 15:11:44,642:INFO: Epoch: 4/5, Step: 102/156, Loss: 0.040300, Time/step: 0.690210
2022-10-15 15:11:46,020:INFO: Epoch: 4/5, Step: 104/156, Loss: 0.032091, Time/step: 0.682400
2022-10-15 15:11:47,316:INFO: Epoch: 4/5, Step: 106/156, Loss: 0.037505, Time/step: 0.641528
2022-10-15 15:11:48,693:INFO: Epoch: 4/5, Step: 108/156, Loss: 0.047304, Time/step: 0.681775
2022-10-15 15:11:50,088:INFO: Epoch: 4/5, Step: 110/156, Loss: 0.027232, Time/step: 0.691029
2022-10-15 15:11:51,491:INFO: Epoch: 4/5, Step: 112/156, Loss: 0.032627, Time/step: 0.694251
2022-10-15 15:11:52,806:INFO: Epoch: 4/5, Step: 114/156, Loss: 0.044435, Time/step: 0.650261
2022-10-15 15:11:54,123:INFO: Epoch: 4/5, Step: 116/156, Loss: 0.051618, Time/step: 0.651658
2022-10-15 15:11:55,437:INFO: Epoch: 4/5, Step: 118/156, Loss: 0.044102, Time/step: 0.650829
2022-10-15 15:11:56,764:INFO: Epoch: 4/5, Step: 120/156, Loss: 0.056609, Time/step: 0.656965
2022-10-15 15:11:58,068:INFO: Epoch: 4/5, Step: 122/156, Loss: 0.037130, Time/step: 0.645459
2022-10-15 15:11:59,524:INFO: Epoch: 4/5, Step: 124/156, Loss: 0.034454, Time/step: 0.721373
2022-10-15 15:12:00,914:INFO: Epoch: 4/5, Step: 126/156, Loss: 0.025431, Time/step: 0.685210
2022-10-15 15:12:02,305:INFO: Epoch: 4/5, Step: 128/156, Loss: 0.034563, Time/step: 0.688490
2022-10-15 15:12:03,612:INFO: Epoch: 4/5, Step: 130/156, Loss: 0.041220, Time/step: 0.647041
2022-10-15 15:12:04,960:INFO: Epoch: 4/5, Step: 132/156, Loss: 0.040192, Time/step: 0.667461
2022-10-15 15:12:06,262:INFO: Epoch: 4/5, Step: 134/156, Loss: 0.046700, Time/step: 0.644125
2022-10-15 15:12:07,611:INFO: Epoch: 4/5, Step: 136/156, Loss: 0.007984, Time/step: 0.667781
2022-10-15 15:12:08,959:INFO: Epoch: 4/5, Step: 138/156, Loss: 0.022535, Time/step: 0.667322
2022-10-15 15:12:10,293:INFO: Epoch: 4/5, Step: 140/156, Loss: 0.046352, Time/step: 0.659954
2022-10-15 15:12:11,548:INFO: Epoch: 4/5, Step: 142/156, Loss: 0.042150, Time/step: 0.620445
2022-10-15 15:12:12,727:INFO: Epoch: 4/5, Step: 144/156, Loss: 0.053316, Time/step: 0.583445
2022-10-15 15:12:13,920:INFO: Epoch: 4/5, Step: 146/156, Loss: 0.031385, Time/step: 0.590484
2022-10-15 15:12:15,105:INFO: Epoch: 4/5, Step: 148/156, Loss: 0.055244, Time/step: 0.586429
2022-10-15 15:12:16,252:INFO: Epoch: 4/5, Step: 150/156, Loss: 0.026327, Time/step: 0.567991
2022-10-15 15:12:17,377:INFO: Epoch: 4/5, Step: 152/156, Loss: 0.028458, Time/step: 0.556656
2022-10-15 15:12:18,582:INFO: Epoch: 4/5, Step: 154/156, Loss: 0.039717, Time/step: 0.596269
2022-10-15 15:12:19,752:INFO: Epoch: 4/5, Step: 156/156, Loss: 0.087024, Time/step: 0.578388
2022-10-15 15:12:20,116:INFO: Epoch 4/5 Finished, Train Loss: 0.039017
2022-10-15 15:16:04,238:INFO: sim matrix size: 4917, 4917
2022-10-15 15:16:05,019:INFO: Length-T: 4917, Length-V:4917
2022-10-15 15:16:05,019:INFO: Text-to-Video:
2022-10-15 15:16:05,019:INFO: >>> R@1: 37.7 - R@5: 69.7 - R@10: 82.1 - Median R: 2.0 - Mean R: 8.3
2022-10-15 15:16:05,019:INFO: Video-to-Text:
2022-10-15 15:16:05,019:INFO: >>> V2T$R@1: 39.7 - V2T$R@5: 71.1 - V2T$R@10: 83.4 - V2T$Median R: 2.0 - V2T$Mean R: 7.5
2022-10-15 15:16:05,040:INFO: the R1 is: 37.7466
2022-10-15 15:19:59,983:INFO: Epoch: 5/5, Step: 2/156, Loss: 0.073106, Time/step: 6.245923
2022-10-15 15:20:01,236:INFO: Epoch: 5/5, Step: 4/156, Loss: 0.028643, Time/step: 0.617921
2022-10-15 15:20:02,624:INFO: Epoch: 5/5, Step: 6/156, Loss: 0.029775, Time/step: 0.687356
2022-10-15 15:20:03,835:INFO: Epoch: 5/5, Step: 8/156, Loss: 0.047313, Time/step: 0.598373
2022-10-15 15:20:05,226:INFO: Epoch: 5/5, Step: 10/156, Loss: 0.035779, Time/step: 0.689709
2022-10-15 15:20:06,642:INFO: Epoch: 5/5, Step: 12/156, Loss: 0.024824, Time/step: 0.700931
2022-10-15 15:20:07,994:INFO: Epoch: 5/5, Step: 14/156, Loss: 0.009868, Time/step: 0.669466
2022-10-15 15:20:09,358:INFO: Epoch: 5/5, Step: 16/156, Loss: 0.060329, Time/step: 0.675002
2022-10-15 15:20:10,729:INFO: Epoch: 5/5, Step: 18/156, Loss: 0.034017, Time/step: 0.678178
2022-10-15 15:20:12,136:INFO: Epoch: 5/5, Step: 20/156, Loss: 0.021547, Time/step: 0.696432
2022-10-15 15:20:13,660:INFO: Epoch: 5/5, Step: 22/156, Loss: 0.030214, Time/step: 0.755370
2022-10-15 15:20:15,036:INFO: Epoch: 5/5, Step: 24/156, Loss: 0.095531, Time/step: 0.680689
2022-10-15 15:20:16,507:INFO: Epoch: 5/5, Step: 26/156, Loss: 0.046146, Time/step: 0.729195
2022-10-15 15:20:17,980:INFO: Epoch: 5/5, Step: 28/156, Loss: 0.037638, Time/step: 0.729500
2022-10-15 15:20:19,389:INFO: Epoch: 5/5, Step: 30/156, Loss: 0.040964, Time/step: 0.697824
2022-10-15 15:20:20,687:INFO: Epoch: 5/5, Step: 32/156, Loss: 0.016231, Time/step: 0.641954
2022-10-15 15:20:22,067:INFO: Epoch: 5/5, Step: 34/156, Loss: 0.018986, Time/step: 0.682041
2022-10-15 15:20:23,489:INFO: Epoch: 5/5, Step: 36/156, Loss: 0.027519, Time/step: 0.704314
2022-10-15 15:20:25,113:INFO: Epoch: 5/5, Step: 38/156, Loss: 0.022617, Time/step: 0.805288
2022-10-15 15:20:26,406:INFO: Epoch: 5/5, Step: 40/156, Loss: 0.020056, Time/step: 0.639323
2022-10-15 15:20:27,804:INFO: Epoch: 5/5, Step: 42/156, Loss: 0.067451, Time/step: 0.691887
2022-10-15 15:20:29,244:INFO: Epoch: 5/5, Step: 44/156, Loss: 0.048023, Time/step: 0.713658
2022-10-15 15:20:30,754:INFO: Epoch: 5/5, Step: 46/156, Loss: 0.020504, Time/step: 0.744392
2022-10-15 15:20:32,076:INFO: Epoch: 5/5, Step: 48/156, Loss: 0.058753, Time/step: 0.654085
2022-10-15 15:20:33,484:INFO: Epoch: 5/5, Step: 50/156, Loss: 0.030354, Time/step: 0.697032
2022-10-15 15:20:34,851:INFO: Epoch: 5/5, Step: 52/156, Loss: 0.026091, Time/step: 0.677095
2022-10-15 15:20:36,259:INFO: Epoch: 5/5, Step: 54/156, Loss: 0.027559, Time/step: 0.693889
2022-10-15 15:20:37,687:INFO: Epoch: 5/5, Step: 56/156, Loss: 0.062357, Time/step: 0.707092
2022-10-15 15:20:39,111:INFO: Epoch: 5/5, Step: 58/156, Loss: 0.023294, Time/step: 0.704541
2022-10-15 15:20:40,396:INFO: Epoch: 5/5, Step: 60/156, Loss: 0.019698, Time/step: 0.634988
2022-10-15 15:20:41,847:INFO: Epoch: 5/5, Step: 62/156, Loss: 0.047098, Time/step: 0.718791
2022-10-15 15:20:43,251:INFO: Epoch: 5/5, Step: 64/156, Loss: 0.067783, Time/step: 0.692128
2022-10-15 15:20:44,754:INFO: Epoch: 5/5, Step: 66/156, Loss: 0.081722, Time/step: 0.744500
2022-10-15 15:20:46,185:INFO: Epoch: 5/5, Step: 68/156, Loss: 0.017657, Time/step: 0.707238
2022-10-15 15:20:47,497:INFO: Epoch: 5/5, Step: 70/156, Loss: 0.035396, Time/step: 0.648890
2022-10-15 15:20:48,843:INFO: Epoch: 5/5, Step: 72/156, Loss: 0.026082, Time/step: 0.666667
2022-10-15 15:20:50,291:INFO: Epoch: 5/5, Step: 74/156, Loss: 0.047869, Time/step: 0.712117
2022-10-15 15:20:51,715:INFO: Epoch: 5/5, Step: 76/156, Loss: 0.060858, Time/step: 0.702752
2022-10-15 15:20:53,160:INFO: Epoch: 5/5, Step: 78/156, Loss: 0.008839, Time/step: 0.711304
2022-10-15 15:20:54,470:INFO: Epoch: 5/5, Step: 80/156, Loss: 0.022211, Time/step: 0.648446
2022-10-15 15:20:55,812:INFO: Epoch: 5/5, Step: 82/156, Loss: 0.066965, Time/step: 0.664266
2022-10-15 15:20:57,206:INFO: Epoch: 5/5, Step: 84/156, Loss: 0.036229, Time/step: 0.689181
2022-10-15 15:20:58,758:INFO: Epoch: 5/5, Step: 86/156, Loss: 0.040857, Time/step: 0.768692
2022-10-15 15:21:00,028:INFO: Epoch: 5/5, Step: 88/156, Loss: 0.075374, Time/step: 0.625862
2022-10-15 15:21:01,533:INFO: Epoch: 5/5, Step: 90/156, Loss: 0.059973, Time/step: 0.744385
2022-10-15 15:21:02,928:INFO: Epoch: 5/5, Step: 92/156, Loss: 0.052909, Time/step: 0.688263
2022-10-15 15:21:04,292:INFO: Epoch: 5/5, Step: 94/156, Loss: 0.071438, Time/step: 0.675001
2022-10-15 15:21:05,671:INFO: Epoch: 5/5, Step: 96/156, Loss: 0.019555, Time/step: 0.682516
2022-10-15 15:21:07,103:INFO: Epoch: 5/5, Step: 98/156, Loss: 0.012402, Time/step: 0.707956
2022-10-15 15:21:08,421:INFO: Epoch: 5/5, Step: 100/156, Loss: 0.032941, Time/step: 0.648888
2022-10-15 15:21:09,916:INFO: Epoch: 5/5, Step: 102/156, Loss: 0.035246, Time/step: 0.739881
2022-10-15 15:21:11,276:INFO: Epoch: 5/5, Step: 104/156, Loss: 0.027741, Time/step: 0.672727
2022-10-15 15:21:12,691:INFO: Epoch: 5/5, Step: 106/156, Loss: 0.036147, Time/step: 0.700546
2022-10-15 15:21:14,247:INFO: Epoch: 5/5, Step: 108/156, Loss: 0.060697, Time/step: 0.770020
2022-10-15 15:21:15,768:INFO: Epoch: 5/5, Step: 110/156, Loss: 0.053094, Time/step: 0.753674
2022-10-15 15:21:17,099:INFO: Epoch: 5/5, Step: 112/156, Loss: 0.017540, Time/step: 0.658798
2022-10-15 15:21:18,429:INFO: Epoch: 5/5, Step: 114/156, Loss: 0.027693, Time/step: 0.657966
2022-10-15 15:21:19,800:INFO: Epoch: 5/5, Step: 116/156, Loss: 0.011643, Time/step: 0.676374
2022-10-15 15:21:21,388:INFO: Epoch: 5/5, Step: 118/156, Loss: 0.055007, Time/step: 0.786572
2022-10-15 15:21:22,683:INFO: Epoch: 5/5, Step: 120/156, Loss: 0.050078, Time/step: 0.639941
2022-10-15 15:21:23,956:INFO: Epoch: 5/5, Step: 122/156, Loss: 0.023774, Time/step: 0.629297
2022-10-15 15:21:25,434:INFO: Epoch: 5/5, Step: 124/156, Loss: 0.074216, Time/step: 0.732065
2022-10-15 15:21:26,972:INFO: Epoch: 5/5, Step: 126/156, Loss: 0.031335, Time/step: 0.762068
2022-10-15 15:21:28,281:INFO: Epoch: 5/5, Step: 128/156, Loss: 0.036979, Time/step: 0.647615
2022-10-15 15:21:29,683:INFO: Epoch: 5/5, Step: 130/156, Loss: 0.030935, Time/step: 0.694098
2022-10-15 15:21:31,076:INFO: Epoch: 5/5, Step: 132/156, Loss: 0.037739, Time/step: 0.689032
2022-10-15 15:21:32,563:INFO: Epoch: 5/5, Step: 134/156, Loss: 0.049480, Time/step: 0.736055
2022-10-15 15:21:33,909:INFO: Epoch: 5/5, Step: 136/156, Loss: 0.046320, Time/step: 0.665456
2022-10-15 15:21:35,328:INFO: Epoch: 5/5, Step: 138/156, Loss: 0.026453, Time/step: 0.702473
2022-10-15 15:21:36,743:INFO: Epoch: 5/5, Step: 140/156, Loss: 0.032663, Time/step: 0.700354
2022-10-15 15:21:38,063:INFO: Epoch: 5/5, Step: 142/156, Loss: 0.034817, Time/step: 0.653547
2022-10-15 15:21:39,258:INFO: Epoch: 5/5, Step: 144/156, Loss: 0.034401, Time/step: 0.590997
2022-10-15 15:21:40,477:INFO: Epoch: 5/5, Step: 146/156, Loss: 0.022925, Time/step: 0.603869
2022-10-15 15:21:41,692:INFO: Epoch: 5/5, Step: 148/156, Loss: 0.029994, Time/step: 0.602233
2022-10-15 15:21:42,915:INFO: Epoch: 5/5, Step: 150/156, Loss: 0.024137, Time/step: 0.605916
2022-10-15 15:21:44,104:INFO: Epoch: 5/5, Step: 152/156, Loss: 0.024262, Time/step: 0.589005
2022-10-15 15:21:45,291:INFO: Epoch: 5/5, Step: 154/156, Loss: 0.023934, Time/step: 0.588372
2022-10-15 15:21:46,500:INFO: Epoch: 5/5, Step: 156/156, Loss: 0.061111, Time/step: 0.598581
2022-10-15 15:21:46,969:INFO: Epoch 5/5 Finished, Train Loss: 0.039250
2022-10-15 15:25:31,627:INFO: sim matrix size: 4917, 4917
2022-10-15 15:25:32,611:INFO: Length-T: 4917, Length-V:4917
2022-10-15 15:25:32,612:INFO: Text-to-Video:
2022-10-15 15:25:32,612:INFO: >>> R@1: 37.9 - R@5: 69.9 - R@10: 82.2 - Median R: 2.0 - Mean R: 8.3
2022-10-15 15:25:32,612:INFO: Video-to-Text:
2022-10-15 15:25:32,612:INFO: >>> V2T$R@1: 39.8 - V2T$R@5: 71.1 - V2T$R@10: 83.4 - V2T$Median R: 2.0 - V2T$Mean R: 7.5
2022-10-15 15:25:32,632:INFO: the R1 is: 37.9296
Hyper-parameters:
do_pretrain: false
do_train: 1
do_eval: 0
train_csv: data/.train.csv
val_csv: data/.val.csv
data_path: data/caption.pickle
features_path: xxx/ActivityNet/images/fps_1
anno_path: xxx/ActivityNet/anns
train7k: false
workers: 8
lr: 0.0001
weight_decay: 0.2
epochs: 5
batch_size: 64
batch_size_val: 32
lr_decay: 0.9
n_display: 1
video_dim: 1024
seed: 42
max_words: 64
max_frames: 64
feature_framerate: 1
margin: 0.1
hard_negative_rate: 0.5
negative_weighting: 1
n_pair: 1
output_dir: xxx
cross_model: cross-base
init_model: null
resume_model: null
do_lower_case: false
warmup_proportion: 0.1
gradient_accumulation_steps: 2
n_gpu: 8
cache_dir: ''
fp16: false
fp16_opt_level: O1
task_type: retrieval
datatype: activity
world_size: 8
local_rank: 0
rank: 0
coef_lr: 1.0
use_mil: false
sampled_use_mil: false
text_num_hidden_layers: 12
visual_num_hidden_layers: 12
cross_num_hidden_layers: 4
loose_type: true
expand_msrvtt_sentences: 1
train_frame_order: 0
eval_frame_order: 0
freeze_layer_num: 0
slice_framepos: 2
linear_patch: 2d
sim_header: meanP
pretrained_clip_name: ViT-B/32
en_wandb: 0
debug: 0
no_resume: 0
jcl: 0
sigma_lambda: 1
hf: 0
vsim_temp: 0.1
intra_sim: 0
text_cond_intra_sim: 0
intra_sim_logit_scale: 0
mq_test: 0
no_expand_type: all
optimizer: adam
clip_lr: 1.0e-07
linspace_samp: null
v_emb_norm: 0
t_emb_norm: 0
v_agg_norm: 0
dataset_type: drl
train_augment: false
horizontal_flip: false
commit: not_set
Hi @jianghaojun, thanks for your log. I see --coef_lr 1.0 in it; can you test with --coef_lr 1e-3? Best~
@ArrowLuo Actually, I directly set the lr to 1e-7 via args.clip_lr. The original lr is also 1e-7 (args.lr * args.coef_lr).
Sorry for the confusion.
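For context, CLIP4Clip applies the lower learning rate to the pretrained CLIP parameters by scaling the base lr with coef_lr in a separate parameter group, so 1e-4 * 1e-3 gives the 1e-7 mentioned above. A minimal sketch of that grouping (the `"clip."` name filter is an illustrative assumption, not the repo's exact matching logic):

```python
import torch.nn as nn

def build_param_groups(model, base_lr=1e-4, coef_lr=1e-3):
    """Split parameters into two groups with different learning rates.

    Parameters whose names contain "clip." (the pretrained backbone in
    this sketch) get base_lr * coef_lr; newly added modules get base_lr.
    """
    clip_params, other_params = [], []
    for name, p in model.named_parameters():
        (clip_params if "clip." in name else other_params).append(p)
    return [
        {"params": clip_params, "lr": base_lr * coef_lr},   # e.g. 1e-7
        {"params": other_params, "lr": base_lr},            # e.g. 1e-4
    ]
```

The returned groups would then be passed to the optimizer (BertAdam in the released code), which reads the per-group `lr` keys.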
Hi @jianghaojun, okay. The last difference I can find is the batch size for one iteration, which affects the number of negative samples. Note that the number of negatives is determined by the true per-step batch size, not by gradient_accumulation_steps (refer to here). Honestly, I do not know its impact on the final result.
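The point about negatives can be sketched with a symmetric InfoNCE loss: the similarity matrix is per-step-batch × per-step-batch, so two accumulated steps of 64 each see only 63 negatives per query, while a true batch of 128 sees 127, even though the averaged gradient covers the same 128 samples. A minimal sketch (embedding dim and temperature are illustrative, not the repo's values):

```python
import torch
import torch.nn.functional as F

def info_nce(text_emb, video_emb, temperature=0.05):
    """Symmetric contrastive loss over a (B, B) similarity matrix.

    Every off-diagonal entry acts as a negative, so each query has
    B - 1 negatives, where B is the per-step batch size.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)
    logits = text_emb @ video_emb.t() / temperature  # (B, B)
    labels = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# One step with B=128 gives 127 negatives per query; splitting it into
# two accumulated steps with B=64 gives only 63 each.
loss_full = info_nce(torch.randn(128, 512), torch.randn(128, 512))
```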
Hi @jianghaojun, I suspect you may be using a different optimizer, because your log does not print the Lr as our released code does.
@ArrowLuo Thanks for your suggestions. I will try training with bs=128.
I also use BertAdam, which is the same as yours.
@ArrowLuo I tried training with bs=128. It is much better than bs=64, but still slightly lower than the reported R@1=40.5.
Btw, training with bs=128 is really costly. T_T
*************** Text-to-Video ****************
epoch 1: R@1: 35.1 - R@5: 65.5 - R@10: 79.1 - Median R: 3.0 - Mean R: 9.8, Train Loss: 0.515252
epoch 2: R@1: 37.1 - R@5: 68.8 - R@10: 82.0 - Median R: 2.0 - Mean R: 8.6, Train Loss: 0.236098
epoch 3: R@1: 39.1 - R@5: 70.7 - R@10: 83.0 - Median R: 2.0 - Mean R: 8.1, Train Loss: 0.155150
epoch 4: R@1: 39.5 - R@5: 71.4 - R@10: 83.3 - Median R: 2.0 - Mean R: 7.9, Train Loss: 0.155149
epoch 5: R@1: 39.7 - R@5: 71.4 - R@10: 83.4 - Median R: 2.0 - Mean R: 7.9, Train Loss: 0.116690
*************** Video-to-Text ****************
epoch 1: V2T$R@1: 37.7 - V2T$R@5: 68.0 - V2T$R@10: 80.9 - V2T$Median R: 2.0 - V2T$Mean R: 8.9
epoch 2: V2T$R@1: 40.2 - V2T$R@5: 70.9 - V2T$R@10: 83.1 - V2T$Median R: 2.0 - V2T$Mean R: 7.8
epoch 3: V2T$R@1: 41.0 - V2T$R@5: 72.2 - V2T$R@10: 84.3 - V2T$Median R: 2.0 - V2T$Mean R: 7.1
epoch 4: V2T$R@1: 41.4 - V2T$R@5: 72.4 - V2T$R@10: 84.9 - V2T$Median R: 2.0 - V2T$Mean R: 7.1
epoch 5: V2T$R@1: 41.4 - V2T$R@5: 72.5 - V2T$R@10: 84.9 - V2T$Median R: 2.0 - V2T$Mean R: 7.1
Hi @jianghaojun, it is good news that the bigger batch size is effective. It is indeed time-consuming. I rechecked your log and could not find anything else that differs. I am not sure whether the remaining gap is simply a matter of reproduction stability.
Hi, @ArrowLuo. Thanks for your effort!
Maybe the problem comes from the small size of the dataset. I find that different GPU devices lead to performance differences, and I guess other factors, such as the PyTorch version, can also cause them.
@ArrowLuo Hi, I directly trained CLIP4Clip (meanP) on ActivityNet and got R@1=37.9, which is much worse than the 40.5 reported in Table 4.
I extracted images from the original videos with FPS=1 and trained CLIP4Clip (meanP) on 8 RTX 3090 GPUs. Due to the GPU memory constraint, I set gradient_accumulation_steps=2.
The captions were downloaded from https://cs.stanford.edu/people/ranjaykrishna/densevid/.
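For reference, the preprocessing described above (fps=1 extraction, then uniform sampling of 64 frames, matching max_frames: 64 and slice_framepos: 2 in the config) can be sketched as follows; this is an illustration, not the repo's exact DataLoader code:

```python
import numpy as np

def uniform_sample_indices(num_frames_available, max_frames=64):
    """Uniformly pick max_frames frame indices from an fps=1 sequence.

    A 120 s video extracted at fps=1 yields 120 images; 64 of them are
    kept, evenly spaced from the first frame to the last. Shorter videos
    keep all of their frames.
    """
    if num_frames_available <= max_frames:
        return np.arange(num_frames_available)
    return np.linspace(0, num_frames_available - 1,
                       num=max_frames, dtype=int)

idx = uniform_sample_indices(120, 64)  # 64 indices spanning 0..119
```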