MIV-XJTU / ARTrack


Stage-2 training: in-place error when backpropagating the loss #65

Closed bonbonx01 closed 5 months ago

bonbonx01 commented 5 months ago

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [802, 768]] is at version 152; expected version 151 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I haven't modified the code. What could be causing this?
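For context, this error fires when a tensor that autograd saved during the forward pass is mutated in place before a backward pass that still needs it. A minimal sketch (illustrative only, not ARTrack code) that reproduces the same message:

```python
import torch

# Stand-in parameter; in the log below the offending tensor is [802, 768].
w = torch.nn.Parameter(torch.randn(802, 768))
opt = torch.optim.SGD([w], lr=1e-3)

loss = (w ** 2).sum()             # pow saves w for its backward formula
loss.backward(retain_graph=True)  # keep the graph for a second backward
opt.step()                        # in-place update bumps w's version counter
loss.backward()                   # RuntimeError: ... modified by an inplace operation
```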

Below is the console output:

script_name: artrack_seq.py config_name: artrack_seq_256_mygot10k.yaml New configuration is shown below. MODEL configuration: {'PRETRAIN_FILE': 'mae_pretrain_vit_base.pth', 'PRETRAIN_PTH': '/data1/txuan/code/artrack/output/checkpoints/train/artrack/artrack_256_mygot/ARTrack_ep0120.pth.tar', 'PRENUM': 7, 'EXTRA_MERGER': False, 'RETURN_INTER': False, 'RETURN_STAGES': [2, 5, 8, 11], 'BACKBONE': {'TYPE': 'vit_base_patch16_224', 'STRIDE': 16, 'MID_PE': False, 'SEP_SEG': False, 'CAT_MODE': 'direct', 'MERGE_LAYER': 0, 'ADD_CLS_TOKEN': False, 'CLS_TOKEN_USE_MODE': 'ignore', 'CE_LOC': [], 'CE_KEEP_RATIO': [], 'CE_TEMPLATE_RANGE': 'ALL'}, 'BINS': 400, 'RANGE': 2, 'ENCODER_LAYER': 3, 'NUM_HEADS': 12, 'MLP_RATIO': 4, 'QKV_BIAS': True, 'DROP_RATE': 0.1, 'ATTN_DROP': 0.0, 'DROP_PATH': 0.0, 'DECODER_LAYER': 6, 'HEAD': {'TYPE': 'PIX', 'NUM_CHANNELS': 768}}

TRAIN configuration: {'LR': 4e-06, 'WEIGHT_DECAY': 0.05, 'EPOCH': 30, 'LR_DROP_EPOCH': 999, 'BATCH_SIZE': 8, 'NUM_WORKER': 4, 'OPTIMIZER': 'ADAMW', 'BACKBONE_MULTIPLIER': 0.1, 'GIOU_WEIGHT': 2.0, 'L1_WEIGHT': 0.0, 'FREEZE_LAYERS': [0], 'PRINT_INTERVAL': 1, 'VAL_EPOCH_INTERVAL': 10, 'GRAD_CLIP_NORM': 0.1, 'AMP': False, 'CE_START_EPOCH': 20, 'CE_WARM_EPOCH': 80, 'DROP_PATH_RATE': 0.1, 'SCHEDULER': {'TYPE': 'step', 'DECAY_RATE': 0.1}}

DATA configuration: {'SAMPLER_MODE': 'causal', 'MEAN': [0.485, 0.456, 0.406], 'STD': [0.229, 0.224, 0.225], 'MAX_SAMPLE_INTERVAL': 200, 'MAX_GAP': 300, 'MAX_INTERVAL': 5, 'INTERVAL_PROB': 0.0, 'TEMP': 2, 'TRAIN': {'DATASETS_NAME': ['GOT10K_train_full'], 'DATASETS_RATIO': [1], 'SAMPLE_PER_EPOCH': 1000}, 'VAL': {'DATASETS_NAME': ['GOT10K_official_val'], 'DATASETS_RATIO': [1], 'SAMPLE_PER_EPOCH': 10000}, 'SEARCH': {'SIZE': 256, 'FACTOR': 4.0, 'CENTER_JITTER': 3, 'SCALE_JITTER': 0.25, 'NUMBER': 36}, 'TEMPLATE': {'NUMBER': 1, 'SIZE': 128, 'FACTOR': 2.0, 'CENTER_JITTER': 0, 'SCALE_JITTER': 0}}

TEST configuration: {'TEMPLATE_FACTOR': 2.0, 'TEMPLATE_SIZE': 128, 'SEARCH_FACTOR': 4.0, 'SEARCH_SIZE': 256, 'EPOCH': 30}
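A side note on the failing tensor's shape: given BINS = 400, RANGE = 2, and NUM_CHANNELS = 768 in the MODEL configuration above, the [802, 768] tensor in the error plausibly corresponds to pix_head.word_embeddings.weight. This assumes the vocabulary is BINS * RANGE plus two special tokens, which is an inference from the config, not confirmed from the code:

```python
# Hedged inference from the MODEL config above; the "+ 2" (special tokens)
# is an assumption, not read from the ARTrack source.
BINS, RANGE, EMBED_DIM = 400, 2, 768
vocab_size = BINS * RANGE + 2   # 802
print(vocab_size, EMBED_DIM)    # 802 768, matching [torch.cuda.FloatTensor [802, 768]]
```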

artrack_seq Load pretrained model from: /data1/txuan/code/artrack/lib/models/artrack_seq/../../../pretrained_models/mae_pretrain_vit_base.pth 4 4 4 Load pretrained model from: /data1/txuan/code/artrack/output/checkpoints/train/artrack/artrack_256_mygot/ARTrack_ep0120.pth.tar settings.local_rank = -1 device_idx = 0 Learnable parameters are shown below. identity backbone.cls_token backbone.pos_embed backbone.pos_embed_z backbone.pos_embed_x backbone.patch_embed.proj.weight backbone.patch_embed.proj.bias backbone.blocks.0.norm1.weight backbone.blocks.0.norm1.bias backbone.blocks.0.attn.qkv.weight backbone.blocks.0.attn.qkv.bias backbone.blocks.0.attn.proj.weight backbone.blocks.0.attn.proj.bias backbone.blocks.0.norm2.weight backbone.blocks.0.norm2.bias backbone.blocks.0.mlp.fc1.weight backbone.blocks.0.mlp.fc1.bias backbone.blocks.0.mlp.fc2.weight backbone.blocks.0.mlp.fc2.bias backbone.blocks.1.norm1.weight backbone.blocks.1.norm1.bias backbone.blocks.1.attn.qkv.weight backbone.blocks.1.attn.qkv.bias backbone.blocks.1.attn.proj.weight backbone.blocks.1.attn.proj.bias backbone.blocks.1.norm2.weight backbone.blocks.1.norm2.bias backbone.blocks.1.mlp.fc1.weight backbone.blocks.1.mlp.fc1.bias backbone.blocks.1.mlp.fc2.weight backbone.blocks.1.mlp.fc2.bias backbone.blocks.2.norm1.weight backbone.blocks.2.norm1.bias backbone.blocks.2.attn.qkv.weight backbone.blocks.2.attn.qkv.bias backbone.blocks.2.attn.proj.weight backbone.blocks.2.attn.proj.bias backbone.blocks.2.norm2.weight backbone.blocks.2.norm2.bias backbone.blocks.2.mlp.fc1.weight backbone.blocks.2.mlp.fc1.bias backbone.blocks.2.mlp.fc2.weight backbone.blocks.2.mlp.fc2.bias backbone.blocks.3.norm1.weight backbone.blocks.3.norm1.bias backbone.blocks.3.attn.qkv.weight backbone.blocks.3.attn.qkv.bias backbone.blocks.3.attn.proj.weight backbone.blocks.3.attn.proj.bias backbone.blocks.3.norm2.weight backbone.blocks.3.norm2.bias backbone.blocks.3.mlp.fc1.weight backbone.blocks.3.mlp.fc1.bias backbone.blocks.3.mlp.fc2.weight backbone.blocks.3.mlp.fc2.bias backbone.blocks.4.norm1.weight backbone.blocks.4.norm1.bias backbone.blocks.4.attn.qkv.weight backbone.blocks.4.attn.qkv.bias backbone.blocks.4.attn.proj.weight backbone.blocks.4.attn.proj.bias backbone.blocks.4.norm2.weight backbone.blocks.4.norm2.bias backbone.blocks.4.mlp.fc1.weight backbone.blocks.4.mlp.fc1.bias backbone.blocks.4.mlp.fc2.weight backbone.blocks.4.mlp.fc2.bias backbone.blocks.5.norm1.weight backbone.blocks.5.norm1.bias backbone.blocks.5.attn.qkv.weight backbone.blocks.5.attn.qkv.bias backbone.blocks.5.attn.proj.weight backbone.blocks.5.attn.proj.bias backbone.blocks.5.norm2.weight backbone.blocks.5.norm2.bias backbone.blocks.5.mlp.fc1.weight backbone.blocks.5.mlp.fc1.bias backbone.blocks.5.mlp.fc2.weight backbone.blocks.5.mlp.fc2.bias backbone.blocks.6.norm1.weight backbone.blocks.6.norm1.bias backbone.blocks.6.attn.qkv.weight backbone.blocks.6.attn.qkv.bias backbone.blocks.6.attn.proj.weight backbone.blocks.6.attn.proj.bias backbone.blocks.6.norm2.weight backbone.blocks.6.norm2.bias backbone.blocks.6.mlp.fc1.weight backbone.blocks.6.mlp.fc1.bias backbone.blocks.6.mlp.fc2.weight backbone.blocks.6.mlp.fc2.bias backbone.blocks.7.norm1.weight backbone.blocks.7.norm1.bias backbone.blocks.7.attn.qkv.weight backbone.blocks.7.attn.qkv.bias backbone.blocks.7.attn.proj.weight backbone.blocks.7.attn.proj.bias backbone.blocks.7.norm2.weight backbone.blocks.7.norm2.bias backbone.blocks.7.mlp.fc1.weight backbone.blocks.7.mlp.fc1.bias backbone.blocks.7.mlp.fc2.weight 
backbone.blocks.7.mlp.fc2.bias backbone.blocks.8.norm1.weight backbone.blocks.8.norm1.bias backbone.blocks.8.attn.qkv.weight backbone.blocks.8.attn.qkv.bias backbone.blocks.8.attn.proj.weight backbone.blocks.8.attn.proj.bias backbone.blocks.8.norm2.weight backbone.blocks.8.norm2.bias backbone.blocks.8.mlp.fc1.weight backbone.blocks.8.mlp.fc1.bias backbone.blocks.8.mlp.fc2.weight backbone.blocks.8.mlp.fc2.bias backbone.blocks.9.norm1.weight backbone.blocks.9.norm1.bias backbone.blocks.9.attn.qkv.weight backbone.blocks.9.attn.qkv.bias backbone.blocks.9.attn.proj.weight backbone.blocks.9.attn.proj.bias backbone.blocks.9.norm2.weight backbone.blocks.9.norm2.bias backbone.blocks.9.mlp.fc1.weight backbone.blocks.9.mlp.fc1.bias backbone.blocks.9.mlp.fc2.weight backbone.blocks.9.mlp.fc2.bias backbone.blocks.10.norm1.weight backbone.blocks.10.norm1.bias backbone.blocks.10.attn.qkv.weight backbone.blocks.10.attn.qkv.bias backbone.blocks.10.attn.proj.weight backbone.blocks.10.attn.proj.bias backbone.blocks.10.norm2.weight backbone.blocks.10.norm2.bias backbone.blocks.10.mlp.fc1.weight backbone.blocks.10.mlp.fc1.bias backbone.blocks.10.mlp.fc2.weight backbone.blocks.10.mlp.fc2.bias backbone.blocks.11.norm1.weight backbone.blocks.11.norm1.bias backbone.blocks.11.attn.qkv.weight backbone.blocks.11.attn.qkv.bias backbone.blocks.11.attn.proj.weight backbone.blocks.11.attn.proj.bias backbone.blocks.11.norm2.weight backbone.blocks.11.norm2.bias backbone.blocks.11.mlp.fc1.weight backbone.blocks.11.mlp.fc1.bias backbone.blocks.11.mlp.fc2.weight backbone.blocks.11.mlp.fc2.bias backbone.norm.weight backbone.norm.bias pix_head.output_bias pix_head.identity_search pix_head.word_embeddings.weight pix_head.position_embeddings.weight pix_head.prev_position_embeddings.weight pix_head.encoder.layers.0.z_norm1.weight pix_head.encoder.layers.0.z_norm1.bias pix_head.encoder.layers.0.x_norm1.weight pix_head.encoder.layers.0.x_norm1.bias pix_head.encoder.layers.0.z_self_attn.qkv.weight pix_head.encoder.layers.0.z_self_attn.qkv.bias pix_head.encoder.layers.0.z_self_attn.proj.weight pix_head.encoder.layers.0.z_self_attn.proj.bias pix_head.encoder.layers.0.x_self_attn.qkv.weight pix_head.encoder.layers.0.x_self_attn.qkv.bias pix_head.encoder.layers.0.x_self_attn.proj.weight pix_head.encoder.layers.0.x_self_attn.proj.bias pix_head.encoder.layers.0.z_norm2_1.weight pix_head.encoder.layers.0.z_norm2_1.bias pix_head.encoder.layers.0.z_norm2_2.weight pix_head.encoder.layers.0.z_norm2_2.bias pix_head.encoder.layers.0.x_norm2_1.weight pix_head.encoder.layers.0.x_norm2_1.bias pix_head.encoder.layers.0.x_norm2_2.weight pix_head.encoder.layers.0.x_norm2_2.bias pix_head.encoder.layers.0.z_x_cross_attention.q.weight pix_head.encoder.layers.0.z_x_cross_attention.q.bias pix_head.encoder.layers.0.z_x_cross_attention.kv.weight pix_head.encoder.layers.0.z_x_cross_attention.kv.bias pix_head.encoder.layers.0.z_x_cross_attention.proj.weight pix_head.encoder.layers.0.z_x_cross_attention.proj.bias pix_head.encoder.layers.0.x_z_cross_attention.q.weight pix_head.encoder.layers.0.x_z_cross_attention.q.bias pix_head.encoder.layers.0.x_z_cross_attention.kv.weight pix_head.encoder.layers.0.x_z_cross_attention.kv.bias pix_head.encoder.layers.0.x_z_cross_attention.proj.weight pix_head.encoder.layers.0.x_z_cross_attention.proj.bias pix_head.encoder.layers.0.z_norm3.weight pix_head.encoder.layers.0.z_norm3.bias pix_head.encoder.layers.0.x_norm3.weight pix_head.encoder.layers.0.x_norm3.bias pix_head.encoder.layers.0.z_mlp.fc1.weight 
pix_head.encoder.layers.0.z_mlp.fc1.bias pix_head.encoder.layers.0.z_mlp.fc2.weight pix_head.encoder.layers.0.z_mlp.fc2.bias pix_head.encoder.layers.0.x_mlp.fc1.weight pix_head.encoder.layers.0.x_mlp.fc1.bias pix_head.encoder.layers.0.x_mlp.fc2.weight pix_head.encoder.layers.0.x_mlp.fc2.bias pix_head.encoder.layers.1.z_norm1.weight pix_head.encoder.layers.1.z_norm1.bias pix_head.encoder.layers.1.x_norm1.weight pix_head.encoder.layers.1.x_norm1.bias pix_head.encoder.layers.1.z_self_attn.qkv.weight pix_head.encoder.layers.1.z_self_attn.qkv.bias pix_head.encoder.layers.1.z_self_attn.proj.weight pix_head.encoder.layers.1.z_self_attn.proj.bias pix_head.encoder.layers.1.x_self_attn.qkv.weight pix_head.encoder.layers.1.x_self_attn.qkv.bias pix_head.encoder.layers.1.x_self_attn.proj.weight pix_head.encoder.layers.1.x_self_attn.proj.bias pix_head.encoder.layers.1.z_norm2_1.weight pix_head.encoder.layers.1.z_norm2_1.bias pix_head.encoder.layers.1.z_norm2_2.weight pix_head.encoder.layers.1.z_norm2_2.bias pix_head.encoder.layers.1.x_norm2_1.weight pix_head.encoder.layers.1.x_norm2_1.bias pix_head.encoder.layers.1.x_norm2_2.weight pix_head.encoder.layers.1.x_norm2_2.bias pix_head.encoder.layers.1.z_x_cross_attention.q.weight pix_head.encoder.layers.1.z_x_cross_attention.q.bias pix_head.encoder.layers.1.z_x_cross_attention.kv.weight pix_head.encoder.layers.1.z_x_cross_attention.kv.bias pix_head.encoder.layers.1.z_x_cross_attention.proj.weight pix_head.encoder.layers.1.z_x_cross_attention.proj.bias pix_head.encoder.layers.1.x_z_cross_attention.q.weight pix_head.encoder.layers.1.x_z_cross_attention.q.bias pix_head.encoder.layers.1.x_z_cross_attention.kv.weight pix_head.encoder.layers.1.x_z_cross_attention.kv.bias pix_head.encoder.layers.1.x_z_cross_attention.proj.weight pix_head.encoder.layers.1.x_z_cross_attention.proj.bias pix_head.encoder.layers.1.z_norm3.weight pix_head.encoder.layers.1.z_norm3.bias pix_head.encoder.layers.1.x_norm3.weight pix_head.encoder.layers.1.x_norm3.bias pix_head.encoder.layers.1.z_mlp.fc1.weight pix_head.encoder.layers.1.z_mlp.fc1.bias pix_head.encoder.layers.1.z_mlp.fc2.weight pix_head.encoder.layers.1.z_mlp.fc2.bias pix_head.encoder.layers.1.x_mlp.fc1.weight pix_head.encoder.layers.1.x_mlp.fc1.bias pix_head.encoder.layers.1.x_mlp.fc2.weight pix_head.encoder.layers.1.x_mlp.fc2.bias pix_head.encoder.layers.2.z_norm1.weight pix_head.encoder.layers.2.z_norm1.bias pix_head.encoder.layers.2.x_norm1.weight pix_head.encoder.layers.2.x_norm1.bias pix_head.encoder.layers.2.z_self_attn.qkv.weight pix_head.encoder.layers.2.z_self_attn.qkv.bias pix_head.encoder.layers.2.z_self_attn.proj.weight pix_head.encoder.layers.2.z_self_attn.proj.bias pix_head.encoder.layers.2.x_self_attn.qkv.weight pix_head.encoder.layers.2.x_self_attn.qkv.bias pix_head.encoder.layers.2.x_self_attn.proj.weight pix_head.encoder.layers.2.x_self_attn.proj.bias pix_head.encoder.layers.2.z_norm2_1.weight pix_head.encoder.layers.2.z_norm2_1.bias pix_head.encoder.layers.2.z_norm2_2.weight pix_head.encoder.layers.2.z_norm2_2.bias pix_head.encoder.layers.2.x_norm2_1.weight pix_head.encoder.layers.2.x_norm2_1.bias pix_head.encoder.layers.2.x_norm2_2.weight pix_head.encoder.layers.2.x_norm2_2.bias pix_head.encoder.layers.2.z_x_cross_attention.q.weight pix_head.encoder.layers.2.z_x_cross_attention.q.bias pix_head.encoder.layers.2.z_x_cross_attention.kv.weight pix_head.encoder.layers.2.z_x_cross_attention.kv.bias pix_head.encoder.layers.2.z_x_cross_attention.proj.weight pix_head.encoder.layers.2.z_x_cross_attention.proj.bias 
pix_head.encoder.layers.2.x_z_cross_attention.q.weight pix_head.encoder.layers.2.x_z_cross_attention.q.bias pix_head.encoder.layers.2.x_z_cross_attention.kv.weight pix_head.encoder.layers.2.x_z_cross_attention.kv.bias pix_head.encoder.layers.2.x_z_cross_attention.proj.weight pix_head.encoder.layers.2.x_z_cross_attention.proj.bias pix_head.encoder.layers.2.z_norm3.weight pix_head.encoder.layers.2.z_norm3.bias pix_head.encoder.layers.2.x_norm3.weight pix_head.encoder.layers.2.x_norm3.bias pix_head.encoder.layers.2.z_mlp.fc1.weight pix_head.encoder.layers.2.z_mlp.fc1.bias pix_head.encoder.layers.2.z_mlp.fc2.weight pix_head.encoder.layers.2.z_mlp.fc2.bias pix_head.encoder.layers.2.x_mlp.fc1.weight pix_head.encoder.layers.2.x_mlp.fc1.bias pix_head.encoder.layers.2.x_mlp.fc2.weight pix_head.encoder.layers.2.x_mlp.fc2.bias pix_head.encoder.z_pos_enc.pos.w_pos pix_head.encoder.z_pos_enc.pos.h_pos pix_head.encoder.z_pos_enc.norm.weight pix_head.encoder.z_pos_enc.norm.bias pix_head.encoder.z_pos_enc.pos_q_linear.weight pix_head.encoder.z_pos_enc.pos_q_linear.bias pix_head.encoder.z_pos_enc.pos_k_linear.weight pix_head.encoder.z_pos_enc.pos_k_linear.bias pix_head.encoder.x_pos_enc.pos.w_pos pix_head.encoder.x_pos_enc.pos.h_pos pix_head.encoder.x_pos_enc.norm.weight pix_head.encoder.x_pos_enc.norm.bias pix_head.encoder.x_pos_enc.pos_q_linear.weight pix_head.encoder.x_pos_enc.pos_q_linear.bias pix_head.encoder.x_pos_enc.pos_k_linear.weight pix_head.encoder.x_pos_enc.pos_k_linear.bias pix_head.encoder.z_rel_pos_bias_table.relative_position_bias_table pix_head.encoder.x_rel_pos_bias_table.relative_position_bias_table pix_head.encoder.z_x_rel_pos_bias_table.relative_position_bias_table pix_head.encoder.x_z_rel_pos_bias_table.relative_position_bias_table pix_head.decoder.layers.0.norm_1.weight pix_head.decoder.layers.0.norm_1.bias pix_head.decoder.layers.0.self_attn1.in_proj_weight pix_head.decoder.layers.0.self_attn1.in_proj_bias pix_head.decoder.layers.0.self_attn1.out_proj.weight pix_head.decoder.layers.0.self_attn1.out_proj.bias pix_head.decoder.layers.0.norm_2_query.weight pix_head.decoder.layers.0.norm_2_query.bias pix_head.decoder.layers.0.norm_2_memory.weight pix_head.decoder.layers.0.norm_2_memory.bias pix_head.decoder.layers.0.multihead_attn.in_proj_weight pix_head.decoder.layers.0.multihead_attn.in_proj_bias pix_head.decoder.layers.0.multihead_attn.out_proj.weight pix_head.decoder.layers.0.multihead_attn.out_proj.bias pix_head.decoder.layers.0.norm_3.weight pix_head.decoder.layers.0.norm_3.bias pix_head.decoder.layers.0.mlpz.fc1.weight pix_head.decoder.layers.0.mlpz.fc1.bias pix_head.decoder.layers.0.mlpz.fc2.weight pix_head.decoder.layers.0.mlpz.fc2.bias pix_head.decoder.layers.1.norm_1.weight pix_head.decoder.layers.1.norm_1.bias pix_head.decoder.layers.1.self_attn1.in_proj_weight pix_head.decoder.layers.1.self_attn1.in_proj_bias pix_head.decoder.layers.1.self_attn1.out_proj.weight pix_head.decoder.layers.1.self_attn1.out_proj.bias pix_head.decoder.layers.1.norm_2_query.weight pix_head.decoder.layers.1.norm_2_query.bias pix_head.decoder.layers.1.norm_2_memory.weight pix_head.decoder.layers.1.norm_2_memory.bias pix_head.decoder.layers.1.multihead_attn.in_proj_weight pix_head.decoder.layers.1.multihead_attn.in_proj_bias pix_head.decoder.layers.1.multihead_attn.out_proj.weight pix_head.decoder.layers.1.multihead_attn.out_proj.bias pix_head.decoder.layers.1.norm_3.weight pix_head.decoder.layers.1.norm_3.bias pix_head.decoder.layers.1.mlpz.fc1.weight pix_head.decoder.layers.1.mlpz.fc1.bias 
pix_head.decoder.layers.1.mlpz.fc2.weight pix_head.decoder.layers.1.mlpz.fc2.bias pix_head.decoder.layers.2.norm_1.weight pix_head.decoder.layers.2.norm_1.bias pix_head.decoder.layers.2.self_attn1.in_proj_weight pix_head.decoder.layers.2.self_attn1.in_proj_bias pix_head.decoder.layers.2.self_attn1.out_proj.weight pix_head.decoder.layers.2.self_attn1.out_proj.bias pix_head.decoder.layers.2.norm_2_query.weight pix_head.decoder.layers.2.norm_2_query.bias pix_head.decoder.layers.2.norm_2_memory.weight pix_head.decoder.layers.2.norm_2_memory.bias pix_head.decoder.layers.2.multihead_attn.in_proj_weight pix_head.decoder.layers.2.multihead_attn.in_proj_bias pix_head.decoder.layers.2.multihead_attn.out_proj.weight pix_head.decoder.layers.2.multihead_attn.out_proj.bias pix_head.decoder.layers.2.norm_3.weight pix_head.decoder.layers.2.norm_3.bias pix_head.decoder.layers.2.mlpz.fc1.weight pix_head.decoder.layers.2.mlpz.fc1.bias pix_head.decoder.layers.2.mlpz.fc2.weight pix_head.decoder.layers.2.mlpz.fc2.bias pix_head.decoder.layers.3.norm_1.weight pix_head.decoder.layers.3.norm_1.bias pix_head.decoder.layers.3.self_attn1.in_proj_weight pix_head.decoder.layers.3.self_attn1.in_proj_bias pix_head.decoder.layers.3.self_attn1.out_proj.weight pix_head.decoder.layers.3.self_attn1.out_proj.bias pix_head.decoder.layers.3.norm_2_query.weight pix_head.decoder.layers.3.norm_2_query.bias pix_head.decoder.layers.3.norm_2_memory.weight pix_head.decoder.layers.3.norm_2_memory.bias pix_head.decoder.layers.3.multihead_attn.in_proj_weight pix_head.decoder.layers.3.multihead_attn.in_proj_bias pix_head.decoder.layers.3.multihead_attn.out_proj.weight pix_head.decoder.layers.3.multihead_attn.out_proj.bias pix_head.decoder.layers.3.norm_3.weight pix_head.decoder.layers.3.norm_3.bias pix_head.decoder.layers.3.mlpz.fc1.weight pix_head.decoder.layers.3.mlpz.fc1.bias pix_head.decoder.layers.3.mlpz.fc2.weight pix_head.decoder.layers.3.mlpz.fc2.bias pix_head.decoder.layers.4.norm_1.weight pix_head.decoder.layers.4.norm_1.bias pix_head.decoder.layers.4.self_attn1.in_proj_weight pix_head.decoder.layers.4.self_attn1.in_proj_bias pix_head.decoder.layers.4.self_attn1.out_proj.weight pix_head.decoder.layers.4.self_attn1.out_proj.bias pix_head.decoder.layers.4.norm_2_query.weight pix_head.decoder.layers.4.norm_2_query.bias pix_head.decoder.layers.4.norm_2_memory.weight pix_head.decoder.layers.4.norm_2_memory.bias pix_head.decoder.layers.4.multihead_attn.in_proj_weight pix_head.decoder.layers.4.multihead_attn.in_proj_bias pix_head.decoder.layers.4.multihead_attn.out_proj.weight pix_head.decoder.layers.4.multihead_attn.out_proj.bias pix_head.decoder.layers.4.norm_3.weight pix_head.decoder.layers.4.norm_3.bias pix_head.decoder.layers.4.mlpz.fc1.weight pix_head.decoder.layers.4.mlpz.fc1.bias pix_head.decoder.layers.4.mlpz.fc2.weight pix_head.decoder.layers.4.mlpz.fc2.bias pix_head.decoder.layers.5.norm_1.weight pix_head.decoder.layers.5.norm_1.bias pix_head.decoder.layers.5.self_attn1.in_proj_weight pix_head.decoder.layers.5.self_attn1.in_proj_bias pix_head.decoder.layers.5.self_attn1.out_proj.weight pix_head.decoder.layers.5.self_attn1.out_proj.bias pix_head.decoder.layers.5.norm_2_query.weight pix_head.decoder.layers.5.norm_2_query.bias pix_head.decoder.layers.5.norm_2_memory.weight pix_head.decoder.layers.5.norm_2_memory.bias pix_head.decoder.layers.5.multihead_attn.in_proj_weight pix_head.decoder.layers.5.multihead_attn.in_proj_bias pix_head.decoder.layers.5.multihead_attn.out_proj.weight 
pix_head.decoder.layers.5.multihead_attn.out_proj.bias pix_head.decoder.layers.5.norm_3.weight pix_head.decoder.layers.5.norm_3.bias pix_head.decoder.layers.5.mlpz.fc1.weight pix_head.decoder.layers.5.mlpz.fc1.bias pix_head.decoder.layers.5.mlpz.fc2.weight pix_head.decoder.layers.5.mlpz.fc2.bias pix_head.decoder.norm.weight pix_head.decoder.norm.bias

checkpoints will be saved to /data1/txuan/code/artrack/output/checkpoints
move_data True
No matching checkpoint file found
Training crashed at epoch 1
Traceback for the error!
Traceback (most recent call last):
  File "/data1/txuan/code/artrack/lib/train/../../lib/train/trainers/base_trainer.py", line 85, in train
    self.train_epoch()
  File "/data1/txuan/code/artrack/lib/train/../../lib/train/trainers/ltr_seq_trainer.py", line 227, in train_epoch
    self.cycle_dataset(loader)
  File "/data1/txuan/code/artrack/lib/train/../../lib/train/trainers/ltr_seq_trainer.py", line 153, in cycle_dataset
    loss.backward(retain_graph=True)
  File "/home/n702/anaconda3/envs/artrack/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/home/n702/anaconda3/envs/artrack/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [802, 768]] is at version 152; expected version 151 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Restarting training from last epoch ...
Finished training!
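The traceback points at loss.backward(retain_graph=True) in ltr_seq_trainer.py, so something is mutating a saved tensor between two backward passes over the retained graph. A standard way to localize which forward operation's saved tensor was clobbered is PyTorch's anomaly detection (generic PyTorch tooling, not an ARTrack-specific fix); with it enabled, the error includes a second traceback naming the forward op:

```python
import torch

# Enable before the failing loss.backward(retain_graph=True) call;
# markedly slower, so use only while debugging.
torch.autograd.set_detect_anomaly(True)
```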

AlexDotHam commented 5 months ago

You can take a look at this issue: https://github.com/MIV-XJTU/ARTrack/issues/60
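For readers who cannot access issue #60: the generic remedy for this class of error is to replace the offending in-place mutation with an out-of-place one, or to clone the tensor before mutating it. A minimal sketch of the pattern (not necessarily the specific patch from that issue):

```python
import torch

w = torch.randn(802, 768, requires_grad=True)
x = w * 1.0                # non-leaf tensor
y = (x ** 2).sum()         # pow saves x for its backward pass

# x.add_(1.0)              # in-place: bumps x's version, y.backward() fails
x = x + 1.0                # out-of-place: the saved x is left untouched

y.backward()               # succeeds
```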