RL-VIG / LibFewShot

LibFewShot: A Comprehensive Library for Few-shot Learning. TPAMI 2023.
MIT License

Question about the DeepBDC method #71

Closed · MasterWinston closed this issue 1 year ago

MasterWinston commented 1 year ago

Hi! First, thanks to the authors for the continuous updates to the codebase. I recently tried the newly added DeepBDC method. Following the settings given in reproduce, I trained DeepBDC_pretrain (val type: stl) on miniImageNet for 170 epochs, but the best 1-shot test accuracy is only 59.7, below the 61.65 of RFS, and the train_acc at the last epoch is only 65.76. Is this a normal result? When I looked at the training log of the earlier RFS method, I noticed its train_acc had already reached 99+ by the last epoch.

wZuck commented 1 year ago

Hi, thanks for your interest. I checked the method's training logs, and this does look abnormal. The contributor of this method is @stu-yue — could you take a look at the problem with us?

stu-yue commented 1 year ago

@MasterWinston Hi, thanks for raising this. I ran a test yesterday and did not see train_acc stuck at 65.76. Could you provide your training log and config file?

MasterWinston commented 1 year ago

Sure. I'm pasting the config and the beginning and end of the training log below. PS: the log entries for the last 100 episodes of the test phase in my final training epoch (epoch 169) are missing.

———————————————config————————————
augment: true
augment_method: DeepBdcAugment
augment_times: 1
augment_times_query: 1
backbone:
  kwargs:
    reduce_dim: 128
  name: resnet12Bdc
batch_size: 64
classifier:
  kwargs:
    dropout_rate: 0.5
    is_distill: false
    num_class: 64
    penalty_C: 2
    reduce_dim: 128
    teacher_path: null
    val_type: stl
  name: DeepBDC_Pretrain
data_root: E:/wansongbo/LibFewShot/data/miniImageNet--ravi
dataloader_num: 1
deterministic: true
device_ids: 0
episode_size: 1
epoch: 170
image_size: 84
includes:

———————————————log————————————
Parameter Group 1
    dampening: 0
    initial_lr: 0.05
    lr: 0.05
    momentum: 0.9
    nesterov: True
    weight_decay: 0.0005
)
2023-03-04 15:19:24,052 [INFO] core.trainer: ============ Train on the train set ============
2023-03-04 15:19:24,053 [INFO] core.trainer: learning rate: [0.001, 0.05]
2023-03-04 15:19:56,447 [INFO] core.trainer: Epoch-(0): [100/600] Time 0.260 (0.324) Calc 0.037 (0.067) Data 0.166 (0.200) Loss 3.936 (3.999) Acc@1 7.812 (5.719)
2023-03-04 15:20:25,001 [INFO] core.trainer: Epoch-(0): [200/600] Time 0.264 (0.304) Calc 0.038 (0.053) Data 0.170 (0.194) Loss 3.346 (3.868) Acc@1 14.062 (7.398)
2023-03-04 15:20:53,542 [INFO] core.trainer: Epoch-(0): [300/600] Time 0.267 (0.298) Calc 0.039 (0.048) Data 0.173 (0.193) Loss 3.579 (3.781) Acc@1 7.812 (9.089)
2023-03-04 15:21:22,235 [INFO] core.trainer: Epoch-(0): [400/600] Time 0.282 (0.295) Calc 0.040 (0.046) Data 0.185 (0.192) Loss 3.487 (3.699) Acc@1 14.062 (10.500)
2023-03-04 15:21:51,283 [INFO] core.trainer: Epoch-(0): [500/600] Time 0.260 (0.294) Calc 0.039 (0.044) Data 0.164 (0.193) Loss 3.403 (3.629) Acc@1 10.938 (11.784)
2023-03-04 15:22:19,570 [INFO] core.trainer: Epoch-(0): [600/600] Time 0.263 (0.292) Calc 0.039 (0.044) Data 0.167 (0.192) Loss 3.324 (3.561) Acc@1 15.625 (13.065)
2023-03-04 15:22:19,573 [INFO] core.trainer: Acc@1 13.065
2023-03-04 15:22:19,574 [INFO] core.trainer: ============ Validation on the val set ============
2023-03-04 15:22:58,631 [INFO] core.trainer: Epoch-(0): [100/600] Time 0.400 (0.390) Calc 0.135 (0.129) Data 0.264 (0.260) Acc@1 20.000 (36.160)
2023-03-04 15:23:37,333 [INFO] core.trainer: Epoch-(0): [200/600] Time 0.401 (0.388) Calc 0.119 (0.128) Data 0.281 (0.259) Acc@1 29.333 (36.287)
2023-03-04 15:24:15,916 [INFO] core.trainer: Epoch-(0): [300/600] Time 0.410 (0.387) Calc 0.127 (0.128) Data 0.281 (0.258) Acc@1 26.667 (36.458)
2023-03-04 15:24:55,608 [INFO] core.trainer: Epoch-(0): [400/600] Time 0.481 (0.390) Calc 0.168 (0.129) Data 0.311 (0.259) Acc@1 33.333 (36.107)
2023-03-04 15:25:35,178 [INFO] core.trainer: Epoch-(0): [500/600] Time 0.412 (0.391) Calc 0.121 (0.129) Data 0.290 (0.260) Acc@1 33.333 (36.104)
2023-03-04 15:26:14,163 [INFO] core.trainer: Epoch-(0): [600/600] Time 0.384 (0.391) Calc 0.121 (0.129) Data 0.261 (0.260) Acc@1 42.667 (36.327)
2023-03-04 15:26:14,165 [INFO] core.trainer: Acc@1 36.327 Best acc -inf
2023-03-04 15:26:14,166 [INFO] core.trainer: ============ Testing on the test set ============
2023-03-04 15:26:53,503 [INFO] core.trainer: Epoch-(0): [100/600] Time 0.370 (0.393) Calc 0.140 (0.131) Data 0.228 (0.261) Acc@1 26.667 (38.760)
2023-03-04 15:27:31,553 [INFO] core.trainer: Epoch-(0): [200/600] Time 0.360 (0.387) Calc 0.122 (0.132) Data 0.237 (0.254) Acc@1 49.333 (38.687)
2023-03-04 15:28:08,840 [INFO] core.trainer: Epoch-(0): [300/600] Time 0.378 (0.382) Calc 0.120 (0.131) Data 0.257 (0.249) Acc@1 50.667 (37.960)
2023-03-04 15:28:47,442 [INFO] core.trainer: Epoch-(0): [400/600] Time 0.368 (0.383) Calc 0.154 (0.131) Data 0.212 (0.250) Acc@1 22.667 (38.843)
2023-03-04 15:29:24,847 [INFO] core.trainer: Epoch-(0): [500/600] Time 0.391 (0.381) Calc 0.151 (0.132) Data 0.239 (0.248) Acc@1 29.333 (39.189)
2023-03-04 15:30:02,770 [INFO] core.trainer: Epoch-(0): [600/600] Time 0.399 (0.381) Calc 0.129 (0.132) Data 0.268 (0.248) Acc@1 38.667 (39.016)
2023-03-04 15:30:02,772 [INFO] core.trainer: Acc@1 39.016 Best acc -inf
2023-03-04 15:30:02,773 [INFO] core.trainer: Time: 0:10:38/1 day, 6:07:40
2023-03-04 15:30:02,882 [WARNING] core.trainer: cls_classifier is not included in DeepBDC_Pretrain
2023-03-04 15:30:02,987 [INFO] core.trainer: ============ Train on the train set ============
2023-03-04 15:30:02,988 [INFO] core.trainer: learning rate: [0.001, 0.05]
2023-03-04 15:30:31,952 [INFO] core.trainer: Epoch-(1): [100/600] Time 0.291 (0.289) Calc 0.038 (0.039) Data 0.197 (0.192) Loss 2.850 (3.080) Acc@1 26.562 (22.438)
2023-03-04 15:31:00,154 [INFO] core.trainer: Epoch-(1): [200/600] Time 0.249 (0.286) Calc 0.038 (0.039) Data 0.154 (0.189) Loss 2.678 (3.034) Acc@1 35.938 (23.148)
2023-03-04 15:31:29,020 [INFO] core.trainer: Epoch-(1): [300/600] Time 0.296 (0.286) Calc 0.038 (0.039) Data 0.201 (0.190) Loss 2.744 (2.985) Acc@1 35.938 (24.182)
2023-03-04 15:31:58,061 [INFO] core.trainer: Epoch-(1): [400/600] Time 0.275 (0.287) Calc 0.039 (0.039) Data 0.179 (0.191) Loss 3.188 (2.944) Acc@1 18.750 (25.016)
2023-03-04 15:32:26,910 [INFO] core.trainer: Epoch-(1): [500/600] Time 0.275 (0.288) Calc 0.039 (0.039) Data 0.179 (0.191) Loss 2.634 (2.902) Acc@1 29.688 (26.050)
2023-03-04 15:32:56,258 [INFO] core.trainer: Epoch-(1): [600/600] Time 0.304 (0.288) Calc 0.038 (0.039) Data 0.208 (0.192) Loss 2.606 (2.867) Acc@1 23.438 (26.781)
2023-03-04 15:32:56,261 [INFO] core.trainer: Acc@1 26.781
2023-03-04 15:32:56,262 [INFO] core.trainer: ============ Validation on the val set ============
2023-03-04 15:33:36,192 [INFO] core.trainer: Epoch-(1): [100/600] Time 0.409 (0.399) Calc 0.119 (0.139) Data 0.289 (0.258) Acc@1 42.667 (41.307)
2023-03-04 15:34:17,812 [INFO] core.trainer: Epoch-(1): [200/600] Time 0.433 (0.407) Calc 0.158 (0.141) Data 0.273 (0.265) Acc@1 30.667 (41.933)
2023-03-04 15:34:58,543 [INFO] core.trainer: Epoch-(1): [300/600] Time 0.414 (0.407) Calc 0.134 (0.141) Data 0.279 (0.264) Acc@1 50.667 (41.849)
2023-03-04 15:35:39,429 [INFO] core.trainer: Epoch-(1): [400/600] Time 0.494 (0.408) Calc 0.160 (0.140) Data 0.333 (0.266) Acc@1 34.667 (41.763)
2023-03-04 15:36:19,105 [INFO] core.trainer: Epoch-(1): [500/600] Time 0.353 (0.405) Calc 0.118 (0.140) Data 0.232 (0.263) Acc@1 38.667 (41.933)
2023-03-04 15:36:59,012 [INFO] core.trainer: Epoch-(1): [600/600] Time 0.420 (0.404) Calc 0.145 (0.141) Data 0.275 (0.262) Acc@1 45.333 (42.224)
2023-03-04 15:36:59,014 [INFO] core.trainer: Acc@1 42.224 Best acc 36.327
2023-03-04 15:36:59,015 [INFO] core.trainer: ============ Testing on the test set ============
2023-03-04 15:37:37,643 [INFO] core.trainer: Epoch-(1): [100/600] Time 0.369 (0.386) Calc 0.128 (0.139) Data 0.239 (0.245) Acc@1 38.667 (45.107)
2023-03-04 15:38:15,996 [INFO] core.trainer: Epoch-(1): [200/600] Time 0.511 (0.385) Calc 0.165 (0.142) Data 0.345 (0.241) Acc@1 49.333 (44.967)
2023-03-04 15:38:55,974 [INFO] core.trainer: Epoch-(1): [300/600] Time 0.446 (0.389) Calc 0.164 (0.143) Data 0.281 (0.245) Acc@1 36.000 (44.858)
2023-03-04 15:39:35,372 [INFO] core.trainer: Epoch-(1): [400/600] Time 0.394 (0.391) Calc 0.166 (0.143) Data 0.226 (0.246) Acc@1 30.667 (45.087)
2023-03-04 15:40:14,365 [INFO] core.trainer: Epoch-(1): [500/600] Time 0.384 (0.390) Calc 0.158 (0.143) Data 0.224 (0.246) Acc@1 57.333 (45.133)
2023-03-04 15:40:53,143 [INFO] core.trainer: Epoch-(1): [600/600] Time 0.378 (0.390) Calc 0.142 (0.142) Data 0.234 (0.246) Acc@1 38.667 (45.200)
2023-03-04 15:40:53,147 [INFO] core.trainer: Acc@1 45.200 Best acc 39.016
2023-03-04 15:40:53,148 [INFO] core.trainer: Time: 0:21:29/1 day, 6:26:05
2023-03-04 15:40:53,323 [WARNING] core.trainer: cls_classifier is not included in DeepBDC_Pretrain
2023-03-04 15:40:53,445 [INFO] core.trainer: ============ Train on the train set ============
……………………………………………………………………
2023-03-05 21:15:08,546 [INFO] core.trainer: ============ Train on the train set ============
2023-03-05 21:15:08,547 [INFO] core.trainer: learning rate: [1e-05, 0.0005000000000000001]
2023-03-05 21:15:36,974 [INFO] core.trainer: Epoch-(168): [100/600] Time 0.264 (0.284) Calc 0.039 (0.039) Data 0.171 (0.188) Loss 1.479 (1.251) Acc@1 57.812 (65.859)
2023-03-05 21:16:05,486 [INFO] core.trainer: Epoch-(168): [200/600] Time 0.280 (0.284) Calc 0.038 (0.038) Data 0.186 (0.190) Loss 1.266 (1.242) Acc@1 64.062 (66.484)
2023-03-05 21:16:34,152 [INFO] core.trainer: Epoch-(168): [300/600] Time 0.261 (0.285) Calc 0.038 (0.038) Data 0.168 (0.190) Loss 1.408 (1.251) Acc@1 65.625 (65.901)
2023-03-05 21:17:02,602 [INFO] core.trainer: Epoch-(168): [400/600] Time 0.282 (0.285) Calc 0.040 (0.038) Data 0.188 (0.190) Loss 1.441 (1.250) Acc@1 62.500 (66.035)
2023-03-05 21:17:30,937 [INFO] core.trainer: Epoch-(168): [500/600] Time 0.300 (0.284) Calc 0.039 (0.038) Data 0.205 (0.190) Loss 1.493 (1.252) Acc@1 60.938 (65.953)
2023-03-05 21:17:59,412 [INFO] core.trainer: Epoch-(168): [600/600] Time 0.270 (0.284) Calc 0.038 (0.038) Data 0.176 (0.190) Loss 1.554 (1.259) Acc@1 59.375 (65.742)
2023-03-05 21:17:59,413 [INFO] core.trainer: Acc@1 65.742
2023-03-05 21:17:59,414 [INFO] core.trainer: ============ Validation on the val set ============
2023-03-05 21:18:38,161 [INFO] core.trainer: Epoch-(168): [100/600] Time 0.332 (0.387) Calc 0.115 (0.137) Data 0.215 (0.249) Acc@1 70.667 (59.667)
2023-03-05 21:19:17,429 [INFO] core.trainer: Epoch-(168): [200/600] Time 0.332 (0.390) Calc 0.102 (0.139) Data 0.228 (0.249) Acc@1 85.333 (59.467)
2023-03-05 21:19:56,326 [INFO] core.trainer: Epoch-(168): [300/600] Time 0.382 (0.389) Calc 0.116 (0.138) Data 0.265 (0.250) Acc@1 57.333 (59.818)
2023-03-05 21:20:35,711 [INFO] core.trainer: Epoch-(168): [400/600] Time 0.391 (0.390) Calc 0.154 (0.138) Data 0.236 (0.251) Acc@1 54.667 (59.843)
2023-03-05 21:21:14,666 [INFO] core.trainer: Epoch-(168): [500/600] Time 0.363 (0.390) Calc 0.123 (0.138) Data 0.238 (0.251) Acc@1 38.667 (59.760)
2023-03-05 21:21:53,028 [INFO] core.trainer: Epoch-(168): [600/600] Time 0.455 (0.389) Calc 0.153 (0.138) Data 0.300 (0.250) Acc@1 60.000 (59.711)
2023-03-05 21:21:53,031 [INFO] core.trainer: Acc@1 59.711 Best acc 59.382
2023-03-05 21:21:53,032 [INFO] core.trainer: ============ Testing on the test set ============
2023-03-05 21:22:29,557 [INFO] core.trainer: Epoch-(168): [100/600] Time 0.446 (0.365) Calc 0.157 (0.137) Data 0.287 (0.226) Acc@1 57.333 (58.400)
2023-03-05 21:23:07,271 [INFO] core.trainer: Epoch-(168): [200/600] Time 0.372 (0.371) Calc 0.160 (0.137) Data 0.210 (0.232) Acc@1 42.667 (59.553)
2023-03-05 21:23:45,360 [INFO] core.trainer: Epoch-(168): [300/600] Time 0.394 (0.374) Calc 0.161 (0.138) Data 0.232 (0.235) Acc@1 62.667 (59.933)
2023-03-05 21:24:23,281 [INFO] core.trainer: Epoch-(168): [400/600] Time 0.313 (0.375) Calc 0.074 (0.139) Data 0.238 (0.235) Acc@1 57.333 (59.663)
2023-03-05 21:25:00,357 [INFO] core.trainer: Epoch-(168): [500/600] Time 0.359 (0.374) Calc 0.157 (0.138) Data 0.201 (0.235) Acc@1 58.667 (59.677)
2023-03-05 21:25:37,470 [INFO] core.trainer: Epoch-(168): [600/600] Time 0.341 (0.374) Calc 0.121 (0.139) Data 0.219 (0.234) Acc@1 58.667 (59.853)
2023-03-05 21:25:37,472 [INFO] core.trainer: Acc@1 59.853 Best acc 58.349
2023-03-05 21:25:37,473 [INFO] core.trainer: Time: 1 day, 5:13:35/1 day, 5:24:12.666667
2023-03-05 21:25:37,594 [WARNING] core.trainer: cls_classifier is not included in DeepBDC_Pretrain
2023-03-05 21:25:37,784 [INFO] core.trainer: ============ Train on the train set ============
2023-03-05 21:25:37,785 [INFO] core.trainer: learning rate: [1e-05, 0.0005000000000000001]
2023-03-05 21:26:06,043 [INFO] core.trainer: Epoch-(169): [100/600] Time 0.263 (0.282) Calc 0.039 (0.038) Data 0.170 (0.187) Loss 1.091 (1.220) Acc@1 71.875 (66.750)
2023-03-05 21:26:34,559 [INFO] core.trainer: Epoch-(169): [200/600] Time 0.241 (0.284) Calc 0.038 (0.038) Data 0.148 (0.189) Loss 1.464 (1.241) Acc@1 60.938 (66.141)
2023-03-05 21:27:02,930 [INFO] core.trainer: Epoch-(169): [300/600] Time 0.304 (0.283) Calc 0.040 (0.038) Data 0.208 (0.189) Loss 1.161 (1.248) Acc@1 70.312 (65.958)
2023-03-05 21:27:31,140 [INFO] core.trainer: Epoch-(169): [400/600] Time 0.271 (0.283) Calc 0.038 (0.038) Data 0.177 (0.189) Loss 1.061 (1.255) Acc@1 71.875 (65.918)
2023-03-05 21:27:59,549 [INFO] core.trainer: Epoch-(169): [500/600] Time 0.266 (0.283) Calc 0.039 (0.038) Data 0.172 (0.189) Loss 0.878 (1.260) Acc@1 70.312 (65.769)
2023-03-05 21:28:28,176 [INFO] core.trainer: Epoch-(169): [600/600] Time 0.260 (0.284) Calc 0.038 (0.038) Data 0.166 (0.189) Loss 1.297 (1.263) Acc@1 65.625 (65.758)
2023-03-05 21:28:28,178 [INFO] core.trainer: Acc@1 65.758
2023-03-05 21:28:28,178 [INFO] core.trainer: ============ Validation on the val set ============
2023-03-05 21:29:06,261 [INFO] core.trainer: Epoch-(169): [100/600] Time 0.373 (0.381) Calc 0.154 (0.136) Data 0.218 (0.243) Acc@1 73.333 (59.093)
2023-03-05 21:29:45,752 [INFO] core.trainer: Epoch-(169): [200/600] Time 0.366 (0.388) Calc 0.122 (0.137) Data 0.243 (0.249) Acc@1 37.333 (59.187)
2023-03-05 21:30:24,366 [INFO] core.trainer: Epoch-(169): [300/600] Time 0.365 (0.387) Calc 0.159 (0.138) Data 0.204 (0.247) Acc@1 72.000 (59.556)
2023-03-05 21:31:03,384 [INFO] core.trainer: Epoch-(169): [400/600] Time 0.349 (0.388) Calc 0.121 (0.138) Data 0.227 (0.248) Acc@1 68.000 (59.263)
2023-03-05 21:31:42,479 [INFO] core.trainer: Epoch-(169): [500/600] Time 0.412 (0.388) Calc 0.142 (0.138) Data 0.268 (0.249) Acc@1 56.000 (59.427)
2023-03-05 21:32:21,125 [INFO] core.trainer: Epoch-(169): [600/600] Time 0.341 (0.388) Calc 0.135 (0.138) Data 0.204 (0.248) Acc@1 49.333 (59.509)
2023-03-05 21:32:21,128 [INFO] core.trainer: Acc@1 59.509 Best acc 59.711
2023-03-05 21:32:21,128 [INFO] core.trainer: ============ Testing on the test set ============
2023-03-05 21:32:59,091 [INFO] core.trainer: Epoch-(169): [100/600] Time 0.371 (0.379) Calc 0.144 (0.139) Data 0.226 (0.239) Acc@1 48.000 (59.387)
2023-03-05 21:33:36,437 [INFO] core.trainer: Epoch-(169): [200/600] Time 0.344 (0.376) Calc 0.117 (0.138) Data 0.226 (0.236) Acc@1 69.333 (59.367)
2023-03-05 21:34:13,297 [INFO] core.trainer: Epoch-(169): [300/600] Time 0.343 (0.374) Calc 0.133 (0.139) Data 0.208 (0.234) Acc@1 62.667 (59.342)
2023-03-05 21:34:50,871 [INFO] core.trainer: Epoch-(169): [400/600] Time 0.385 (0.374) Calc 0.120 (0.138) Data 0.264 (0.234) Acc@1 54.667 (59.177)
2023-03-05 21:35:28,295 [INFO] core.trainer: Epoch-(169): [500/600] Time 0.411 (0.374) Calc 0.153 (0.139) Data 0.256 (0.234) Acc@1 66.667 (59.491)

The training log ends there.
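As a side note on reading these logs: the per-epoch train accuracy is the easiest number to track when comparing a run like this against a reference log. A minimal sketch for extracting it, assuming exactly the log format shown above (the filename is a placeholder):

import re

# Match only the train-phase epoch summaries, e.g.
#   "2023-03-05 21:28:28,178 [INFO] core.trainer: Acc@1 65.758"
# Val/test summaries end with "Best acc ..." and the per-100-episode lines
# end with a parenthesized running average, so neither matches this pattern.
pattern = re.compile(r"core\.trainer: Acc@1 (\d+\.\d+)\s*$")

train_acc = []
with open("train.log") as f:  # placeholder path to a LibFewShot log file
    for line in f:
        match = pattern.search(line)
        if match:
            train_acc.append(float(match.group(1)))

print(f"{len(train_acc)} train epochs, final Acc@1 = {train_acc[-1]:.3f}")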

MasterWinston commented 1 year ago

@stu-yue Also, when I tried resume_train on the resulting weights, I got this error: RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory. Does that mean the file is corrupted? Oddly, I can still use this pretrained file to train the metaDeepBDC method.
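For context, this RuntimeError is what PyTorch raises when a checkpoint saved in its default zip container is truncated, which typically happens when a run is killed mid-save. A quick standalone readability check, with a placeholder path (this is generic PyTorch, not LibFewShot's own loading code):

import torch

ckpt_path = "checkpoints/model_last.pth"  # placeholder path
try:
    state = torch.load(ckpt_path, map_location="cpu")
    print("checkpoint readable; loaded object of type:", type(state).__name__)
except RuntimeError as err:
    # A truncated zip archive raises exactly the
    # "failed finding central directory" error quoted above.
    print("checkpoint unreadable (likely truncated):", err)

If the last file is unreadable, the periodic snapshots written every save_interval epochs (see save_interval: 10 in the config below) are the usual fallback.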

stu-yue commented 1 year ago

Hi. Based on the log information you provided, you could try pulling the latest code again and running with the following config:

augment: true
augment_times: 1
augment_times_query: 1
augment_method: DeepBdcAugment
backbone:
  name: resnet12Bdc
  kwargs:
    reduce_dim: 128
batch_size: 64
classifier:
  name: DeepBDC_Pretrain
  kwargs: 
    reduce_dim: 128
    dropout_rate: 0.5
    val_type: stl
    num_class: 64
    penalty_C: 0.1        # 1-shot(0.1)  5-shot(2.0)
    is_distill: false     # pretrain or distill
    emb_func_path: ~
    classifier_path: ~
data_root: /home/wangyue/miniImageNet--ravi
deterministic: true
n_gpu: 1
device_ids: 1
episode_size: 1
epoch: 170
image_size: 84
includes:
- headers/data.yaml
- headers/device.yaml
- headers/misc.yaml
- headers/model.yaml
- headers/optimizer.yaml
log_interval: 100
log_level: info
log_name: null
log_paramerter: false
lr_scheduler:
  kwargs:
    gamma: 0.1
    milestones:
    - 100
    - 150
  name: MultiStepLR

optimizer:
  kwargs:
    lr: 0.05
    momentum: 0.9
    nesterov: true
    weight_decay: 0.0005
  name: SGD
  other: 
    emb_func.bdc_pool.temperature: 0.001
parallel_part:
- emb_func
- classifier
port: 48828
pretrain_path: ~
rank: 0
result_root: ./results
resume: false
save_interval: 10
save_part:
- emb_func
- classifier
seed: 1
tag: null
tb_scale: 3.3333333333333335
val_per_epoch: 1
# val_per_epoch: 200  # NOTE: just save last model
test_episode: 600   # test/val n_episodes, number of episodes in meta val
test_epoch: 5
test_query: 15
test_shot: 1
test_way: 5
train_episode: 600
query_num: 15
shot_num: 1
use_memory: false
way_num: 5
workers: 12
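
For reference, a quick way to confirm that a local copy of this config parses and carries the intended 1-shot settings before launching a multi-day run; this uses PyYAML with a placeholder filename and is independent of LibFewShot's own config loader:

import yaml  # PyYAML

with open("deepbdc_pretrain_1shot.yaml") as f:  # placeholder filename
    cfg = yaml.safe_load(f)

# Spot-check the values this thread turns on.
assert cfg["classifier"]["kwargs"]["penalty_C"] == 0.1  # 0.1 for 1-shot, 2.0 for 5-shot
assert cfg["classifier"]["kwargs"]["val_type"] == "stl"
assert cfg["val_per_epoch"] == 1  # validating every epoch is what makes *_best files get saved
print("config is consistent with the 1-shot STL pretrain setup")

Note that penalty_C here is 0.1 for the 1-shot setting, whereas the earlier config in this thread used 2.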

Also, below is the stl_deepbdc pretrain training log from the original paper's setting; you can use it to compare how the loss should evolve: pretrain.log

MasterWinston commented 1 year ago

@stu-yue OK, thank you. I suspect the problem was that my local working copy was out of date. One more thing: among the weight files shared via the cloud drive in https://github.com/RL-VIG/LibFewShot/pull/67, the checkpoints for the stlDeepBDC method contain only a model_last file (I don't yet know how to extract model_best and the like from model_last); there is no emb_func_best, model_best, or any other file. Could you provide them? You can send them to my email (daxiawan0408@qq.com) or post them here. Many thanks!!
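(On the extraction question: model_best cannot be reconstructed from model_last after the fact, since picking the best epoch needs validation results that were never recorded. Splitting a combined checkpoint into its parts is only a state-dict filter, though. A sketch, under the assumption that model_last.pth holds a state dict whose keys are prefixed with the save_part names from the config above; the exact key layout is an assumption, not verified against LibFewShot:)

import torch

ckpt = torch.load("model_last.pth", map_location="cpu")
# Assumption: the weights may be nested under a "model" key; otherwise treat
# the loaded object as the state dict itself.
state = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

for part in ("emb_func", "classifier"):  # mirrors save_part in the config
    prefix = part + "."
    sub = {k[len(prefix):]: v for k, v in state.items() if k.startswith(prefix)}
    if sub:
        torch.save(sub, f"{part}_last.pth")
        print(f"wrote {part}_last.pth ({len(sub)} tensors)")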

stu-yue commented 1 year ago

@MasterWinston Sure. (Because the original paper's stl_deepbdc saved only model_last, val_per_epoch was set to a very large value.) I will update the corresponding config in reproduce within the next couple of days (a few parameter names in the original DeepBDC STL config were not updated). In the meantime, you can use the config I posted above; it saves emb_func_best and the other files during training (the key point is the val_per_epoch setting).

wZuck commented 1 year ago

Many methods that use pretraining skip evaluation during the pretraining stage and simply take the final model. This issue looks resolved, thanks @stu-yue. If anything else comes up, feel free to re-open this issue.

MasterWinston commented 1 year ago

@wZuck @stu-yue I've hit this kind of problem again. During distillation, at around epoch 120 the train accuracy is still only in the 60s, whereas the non-distilled pretraining stage had already reached 80+. I think the problem may lie in resuming from a checkpoint: in both runs where train_acc stayed in the 60s, training had been interrupted partway and continued with resume_train. How should this be fixed? By editing the yaml file, or some other way? Thanks, looking forward to your reply!

stu-yue commented 1 year ago

@MasterWinston Hi, the config files have been updated. Under normal, uninterrupted training the method should work fine. For the resumed runs you describe: was training interrupted once during pretraining and once again during distillation? Check whether the train loss is still decreasing, and compare the training trends against pretrain.log and distil1.log (attached below) to narrow down the problem; if the model parameters are saved and loaded correctly, the training trend should be identical. Thanks for the question!

pretrain.log distil1.log

MasterWinston commented 1 year ago

@stu-yue OK. To elaborate: the first interruption happened during pretraining, and the loss stopped decreasing after epoch 100. I then retrained with the yaml you posted above, with no interruption, and everything was normal; the final train_acc matched your log closely. Later, while distilling from those healthy pretrained weights, training was interrupted once by a hardware problem, and after resume_train the loss again stopped decreasing at epoch 100. I suspect the resume is the cause (the log shows the learning rate did change at epoch 100, but the loss did not drop). From searching online, it seems I may need to pass a last_epoch argument to the lr_scheduler when resuming. I haven't tried it yet, so I'm not sure this is the cause.
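The pitfall described here is real in plain PyTorch: if the LR scheduler is re-created on resume without restoring its state, MultiStepLR starts counting epochs from zero again, so the optimizer's lr and the milestone schedule drift apart. A minimal illustration in plain PyTorch (not LibFewShot's actual resume code):

import torch
from torch import nn, optim

model = nn.Linear(8, 8)
opt = optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
sched = optim.lr_scheduler.MultiStepLR(opt, milestones=[100, 150], gamma=0.1)

for _ in range(120):  # stand-in for 120 training epochs
    opt.step()
    sched.step()

ckpt = {"epoch": 120, "optimizer": opt.state_dict(), "lr_scheduler": sched.state_dict()}

# --- resume in a fresh run ---
opt2 = optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
sched2 = optim.lr_scheduler.MultiStepLR(opt2, milestones=[100, 150], gamma=0.1)
opt2.load_state_dict(ckpt["optimizer"])       # restores the decayed lr (0.005)
sched2.load_state_dict(ckpt["lr_scheduler"])  # restores last_epoch = 120
# Without these two load_state_dict calls, the lr restarts at 0.05 and the
# milestone at epoch 150 fires 120 epochs too late.
print(opt2.param_groups[0]["lr"], sched2.last_epoch)  # -> 0.005 120

Re-creating the scheduler with the last_epoch constructor argument (the parameter MasterWinston found; the correct spelling is last_epoch) is an alternative, but it requires initial_lr to be present in the optimizer's param groups; restoring both state dicts is the more common route.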

stu-yue commented 1 year ago

Could you send me the models and logs from the two interrupted runs, along with the result files from the subsequent resumed training? (737254958@qq.com)

MasterWinston commented 1 year ago

Sure.

ziyuanji commented 9 months ago

May I ask: how do I predict the class of an arbitrary, real-world image when there is no support set?