aries-young opened this issue 2 years ago
I have the same issue. The reproduced performance on the test set is very poor.
[Epoch] 200
[Loss]
> loss_label 0.0943
> class_error 0.0000
> loss_span 0.0331
> loss_giou 0.5693
> loss_label_0 0.0979
> class_error_0 0.0000
> loss_span_0 0.0354
> loss_giou_0 0.5991
> loss_label_1 0.0954
> class_error_1 0.0000
> loss_span_1 0.0334
> loss_giou_1 0.5739
> loss_label_2 0.0950
> class_error_2 0.0000
> loss_span_2 0.0335
> loss_giou_2 0.5748
> loss_overall 2.8351
[Metrics_No_NMS]
OrderedDict([ ('VG-full-R1@0.1', 50.64),
('VG-full-R1@0.3', 35.1),
('VG-full-R1@0.5', 19.71),
('VG-full-R1@0.7', 6.27),
('VG-full-R5@0.1', 90.42),
('VG-full-R5@0.3', 81.11),
('VG-full-R5@0.5', 66.65),
('VG-full-R5@0.7', 31.87),
('VG-full-mAP', 32.6),
('VG-full-mIoU@R1', 0.2262),
('VG-full-mIoU@R5', 0.5488)])
Please refer to the following configurations.
aux_loss=True
backbone=clip
bs=16
data_type=features
dec_layers=4
dim_feedforward=1024
dropout=0.1
early_stop_patience=10
enc_layers=4
eos_coef=0.1
eval_bs=16
eval_untrained=False
hidden_dim=256
input_dropout=0.5
lr=0.0001
lr_drop_step=20
method=joint
n_input_proj=2
nheads=8
norm_tfeat=True
norm_vfeat=True
num_input_frames=64
num_input_sentences=4
num_queries=40
optimizer=adamw
pre_norm=False
pred_label=cos
scheduler=steplr
seed=1
set_cost_class=1
set_cost_giou=2
set_cost_span=1
span_type=cw
txt_drop_ratio=0
txt_feat_dim=512
txt_position_embedding=sine
use_txt_pos=True
vid_feat_dim=512
vid_position_embedding=sine
wd=0.0001
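(If it helps others compare runs: below is a small, unofficial helper for diffing two `key=value` dumps like the one above. The function names are mine, not from this repository.)

```python
# Unofficial helper (names are illustrative, not from this repo) to diff
# two "key=value" config dumps such as the one above, so renamed or
# mismatched options are easy to spot.
def parse_config(text):
    pairs = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

def diff_configs(a, b):
    """Print every key whose value differs between config dicts a and b."""
    for key in sorted(set(a) | set(b)):
        if a.get(key) != b.get(key):
            print(f"{key}: {a.get(key, '<missing>')} != {b.get(key, '<missing>')}")
```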
@sangminwoo Thanks for your help! We tried your new configurations but still cannot reproduce the performance shown in the paper.
[Epoch] 200
[Loss]
> loss_label 0.0842
> class_error 0.0000
> loss_span 0.0328
> loss_giou 0.3810
> loss_label_0 0.1045
> class_error_0 0.0250
> loss_span_0 0.0373
> loss_giou_0 0.4137
> loss_label_1 0.0890
> class_error_1 0.0000
> loss_span_1 0.0339
> loss_giou_1 0.3922
> loss_label_2 0.0835
> class_error_2 0.0000
> loss_span_2 0.0328
> loss_giou_2 0.3828
> loss_overall 2.0678
[Metrics_No_NMS]
OrderedDict([ ('VG-full-R1@0.1', 46.63),
('VG-full-R1@0.3', 31.36),
('VG-full-R1@0.5', 19.32),
('VG-full-R1@0.7', 6.19),
('VG-full-R5@0.1', 82.59),
('VG-full-R5@0.3', 74.99),
('VG-full-R5@0.5', 62.8),
('VG-full-R5@0.7', 36.0),
('VG-full-mAP', 33.36),
('VG-full-mIoU@R1', 0.2087),
('VG-full-mIoU@R5', 0.5248)])
Also, we noticed that you listed a parameter named 'set_cost_class', which does not exist in the public code (there is a parameter named 'set_cost_query' instead). Here are the configurations we used in training (see the matcher sketch after the table for how we understand these cost weights):
| Parameter | Value |
| --- | --- |
| results_dir | results |
| device | 0 |
| seed | 1 |
| log_interval | 1 |
| val_interval | 5 |
| save_interval | 50 |
| use_gpu | True |
| debug | False |
| eval_untrained | False |
| log_dir | logs |
| resume | |
| resume_all | False |
| att_visualize | False |
| corr_visualize | False |
| dist_visualize | False |
| start_epoch | |
| end_epoch | 200 |
| early_stop_patience | -1 |
| lr | 0.0001 |
| lr_drop_step | 20 |
| wd | 0.0001 |
| optimizer | adamw |
| scheduler | steplr |
| dataset | charades |
| data_type | features |
| num_input_frames | 64 |
| num_input_sentences | 4 |
| bs | 16 |
| eval_bs | 1 |
| num_workers | 16 |
| pin_memory | True |
| checkpoint | ./save |
| norm_vfeat | True |
| norm_tfeat | True |
| txt_drop_ratio | 0 |
| backbone | clip |
| method | joint |
| hidden_dim | 256 |
| nheads | 8 |
| enc_layers | 4 |
| dec_layers | 4 |
| vid_feat_dim | 512 |
| txt_feat_dim | 512 |
| num_proposals | 40 |
| input_dropout | 0.5 |
| use_vid_pos | True |
| use_txt_pos | True |
| n_input_proj | 2 |
| dropout | 0.1 |
| dim_feedforward | 1024 |
| pre_norm | False |
| vid_position_embedding | sine |
| txt_position_embedding | sine |
| set_cost_span | 1 |
| set_cost_giou | 2 |
| set_cost_query | 1 |
| aux_loss | True |
| eos_coef | 0.1 |
| pred_label | cos |
| span_type | cw |
| no_sort_results | False |
| max_before_nms | 10 |
| max_after_nms | 10 |
| conf_thd | 0.0 |
| nms_thd | -1 |
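For reference, this is how we understand the `set_cost_span` / `set_cost_giou` / `set_cost_query` weights to combine in a DETR-style Hungarian matching cost. This is only our sketch under that assumption, not the repository's actual matcher, and it assumes spans have already been converted from center-width ('cw') to (start, end):

```python
# Sketch of a DETR-style Hungarian matching cost using the three
# set_cost_* weights above. Illustrative only, not this repository's
# matcher; spans are assumed to be (start, end) pairs.
import torch
from scipy.optimize import linear_sum_assignment

def temporal_giou(pred, gt):
    """Generalized IoU between 1-D spans, shapes (P, 2) and (G, 2)."""
    inter = (torch.min(pred[:, None, 1], gt[None, :, 1])
             - torch.max(pred[:, None, 0], gt[None, :, 0])).clamp(min=0)
    union = ((pred[:, 1] - pred[:, 0])[:, None]
             + (gt[:, 1] - gt[:, 0])[None, :] - inter)
    enclose = (torch.max(pred[:, None, 1], gt[None, :, 1])
               - torch.min(pred[:, None, 0], gt[None, :, 0]))
    return inter / union - (enclose - union) / enclose

def match(pred_spans, pred_logits, gt_spans, gt_labels,
          set_cost_span=1.0, set_cost_giou=2.0, set_cost_query=1.0):
    cost_span = torch.cdist(pred_spans, gt_spans, p=1)    # L1 span distance
    cost_giou = -temporal_giou(pred_spans, gt_spans)      # negative gIoU
    cost_query = -pred_logits.softmax(-1)[:, gt_labels]   # class-prob cost
    cost = (set_cost_span * cost_span
            + set_cost_giou * cost_giou
            + set_cost_query * cost_query)
    return linear_sum_assignment(cost.detach().numpy())   # (pred_idx, gt_idx)
```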
Hello, how long did your training take? How many GPUs did you use?
@sangminwoo I have the same issue. Could you give me some advice on how to solve this problem? Thank you very much.
[Epoch] 200
[Loss]
> loss_label 0.6117
> class_error 6.6606
> loss_span 0.0517
> loss_giou 0.7141
> loss_label_0 0.6115
> class_error_0 4.8437
> loss_span_0 0.0525
> loss_giou_0 0.7238
> loss_label_1 0.6097
> class_error_1 5.1319
> loss_span_1 0.0523
> loss_giou_1 0.7173
> loss_label_2 0.6095
> class_error_2 7.1385
> loss_span_2 0.0521
> loss_giou_2 0.7161
> loss_overall 5.5224
[Metrics_No_NMS]
OrderedDict([ ('VG-full-R1@0.1', 72.64),
('VG-full-R1@0.3', 39.87),
('VG-full-R1@0.5', 15.49),
('VG-full-R1@0.7', 6.77),
('VG-full-R5@0.1', 89.32),
('VG-full-R5@0.3', 80.04),
('VG-full-R5@0.5', 59.88),
('VG-full-R5@0.7', 34.18),
('VG-full-mIoU@R1', 0.2702),
('VG-full-mIoU@R5', 0.5454),
('VG-long-R1@0.5', 0.16),
('VG-long-R5@0.5', 31.79),
('VG-middle-R1@0.5', 11.88),
('VG-middle-R5@0.5', 73.29),
('VG-short-R1@0.5', 21.51),
('VG-short-R5@0.5', 57.77)])
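(For anyone comparing these numbers: the R1@IoU values are top-1 recall at a temporal-IoU threshold. A minimal sketch of that metric as I understand it, not the repository's evaluation code:)

```python
# Sketch of Recall@1 at a temporal IoU threshold (how VG-full-R1@* values
# are conventionally defined). Not this repository's evaluation code.
def temporal_iou(pred, gt):
    """IoU between two (start, end) spans."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_spans, gt_spans, threshold=0.5):
    """Percentage of queries whose top-1 span reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= threshold
               for p, g in zip(top1_spans, gt_spans))
    return 100.0 * hits / len(gt_spans)
```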
Hi! I am also facing the same issue. Can anyone tell me what your hardware specs are?
I trained the model on one V100 GPU (it took about 1 hour to train for the full 200 epochs) and got the following best performance (epoch 65):
"VG-full-R1@0.1": 49.59, "VG-full-R1@0.3": 32.92, "VG-full-R1@0.5": 18.63, "VG-full-R1@0.7": 6.91,
"VG-full-R5@0.1": 88.91, "VG-full-R5@0.3": 79.27, "VG-full-R5@0.5": 62.75, "VG-full-R5@0.7": 33.54,
"VG-full-mAP": 32.82, "VG-full-mIoU@R1": 0.2193, "VG-full-mIoU@R5": 0.54
Hello, author! I followed all of the feature extraction and Charades training suggestions on your GitHub homepage, but in my environment the reproduced results of LVTR-CLIP at the 200th epoch looked like this:
To track the problem down, I used TensorBoard to collect evaluation information for each epoch (logging sketch below). I followed your code without any modification, except for removing the evaluation record for the "long" length_range. Would you kindly give us some advice on how to successfully reproduce your work? Thank you very much!
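Roughly like this; the tag names and the `eval_history` structure are mine, not from the repository:

```python
# Minimal per-epoch metric logging with TensorBoard. Tag names and the
# eval_history list of metric dicts are illustrative, not from this repo.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs/lvtr_clip_charades")
for epoch, metrics in enumerate(eval_history):   # one metrics dict per epoch
    for name, value in metrics.items():          # e.g. 'VG-full-R1@0.5'
        writer.add_scalar(f"eval/{name}", value, epoch)
writer.close()
```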