h751410234 / DATR

A strong DETR-based detector named Domain Adaptive detection TRansformer (DATR) for unsupervised domain adaptation in object detection.
Apache License 2.0
19 stars, 1 fork

Remote sensing data #4

Closed · G-200010 closed this issue 3 months ago

G-200010 commented 3 months ago

Hello, I would like to run the xView-to-DOTA task with this framework. Besides changing `num_classes` in the config, what other modifications are needed? After changing `num_classes` I still get an error that looks like a class-count (index out of range) problem.

h751410234 commented 3 months ago

```python
_base_ = ['coco_transformer.py']

num_classes = 4

lr = 0.0001
param_dict_type = 'default'
lr_backbone = 1e-05
lr_backbone_names = ['backbone.0']
lr_linear_proj_names = ['reference_points', 'sampling_offsets']
lr_linear_proj_mult = 0.1
ddetr_lr_param = False
batch_size = 2
weight_decay = 0.0001
epochs = 36
lr_drop = 30  # epoch index at which the LR drops, e.g. 30 means the LR drops at epoch 30; epochs are 0-indexed (0-35)
save_checkpoint_interval = 1
clip_max_norm = 0.1
onecyclelr = False
multi_step_lr = False
lr_drop_list = [33, 45]

modelname = 'dino'
frozen_weights = None
backbone = 'resnet50'
use_checkpoint = True

dilation = False
position_embedding = 'sine'
pe_temperatureH = 20
pe_temperatureW = 20
return_interm_indices = [1, 2, 3]
backbone_freeze_keywords = None
enc_layers = 6
dec_layers = 6
unic_layers = 0
pre_norm = False
dim_feedforward = 2048
hidden_dim = 256
dropout = 0.0
nheads = 8
num_queries = 900
query_dim = 4
num_patterns = 0
pdetr3_bbox_embed_diff_each_layer = False
pdetr3_refHW = -1
random_refpoints_xy = False
fix_refpoints_hw = -1
dabdetr_yolo_like_anchor_update = False
dabdetr_deformable_encoder = False
dabdetr_deformable_decoder = False
use_deformable_box_attn = False
box_attn_type = 'roi_align'
dec_layer_number = None
num_feature_levels = 4
enc_n_points = 4
dec_n_points = 4
decoder_layer_noise = False
dln_xy_noise = 0.2
dln_hw_noise = 0.2
add_channel_attention = False
add_pos_value = False
two_stage_type = 'standard'
two_stage_pat_embed = 0
two_stage_add_query_num = 0
two_stage_bbox_embed_share = False
two_stage_class_embed_share = False
two_stage_learn_wh = False
two_stage_default_hw = 0.05
two_stage_keep_all_tokens = False
num_select = 300
transformer_activation = 'relu'
batch_norm_type = 'FrozenBatchNorm2d'
masks = False
aux_loss = True
set_cost_class = 2.0
set_cost_bbox = 5.0
set_cost_giou = 2.0
cls_loss_coef = 1.0
mask_loss_coef = 1.0
dice_loss_coef = 1.0
bbox_loss_coef = 5.0
giou_loss_coef = 2.0
enc_loss_coef = 1.0
interm_loss_coef = 1.0
no_interm_box_loss = False
focal_alpha = 0.25

# ---- DA setting
da_backbone_loss_coef = 0.1
da_proto_loss_coef = 0.1
da_global_proto_coef = 0.1

decoder_sa_type = 'sa'  # ['sa', 'ca_label', 'ca_content']
matcher_type = 'HungarianMatcher'  # or SimpleMinsumMatcher
decoder_module_seq = ['sa', 'ca', 'ffn']
nms_iou_threshold = -1

dec_pred_bbox_embed_share = True
dec_pred_class_embed_share = True

# for dn
use_dn = True
dn_number = 100
dn_box_noise_scale = 0.4
dn_label_noise_ratio = 0.5
embed_init_tgt = True
dn_labelbook_size = 4

match_unstable_error = True

# for ema
use_ema = False
ema_decay = 0.9997
ema_epoch = 0
use_detached_boxes_dec_out = False

# for self-training
burn_epochs = 40  # epoch at which self-training starts
strong_aug = True  # strong augmentation on target-domain data
pseudo_label_threshold = 0.3  # pseudo-label confidence threshold
ema_decay_teacher = 0.9997
ema_decay_best_model = 0.9
self_training_loss_coef = 1.0
```

You can use this config as a reference.
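To make the self-training settings above concrete, here is a minimal sketch of the two mechanisms they control: the EMA teacher update (`ema_decay_teacher`) and confidence-based pseudo-label filtering (`pseudo_label_threshold`). The function and variable names are illustrative only, not DATR's actual API.

```python
# Illustrative sketch, not DATR's code: EMA teacher update and pseudo-label
# filtering as typically used in mean-teacher self-training.
ema_decay_teacher = 0.9997       # value from the config above
pseudo_label_threshold = 0.3     # value from the config above

def ema_update(teacher_params, student_params, decay=ema_decay_teacher):
    """teacher <- decay * teacher + (1 - decay) * student, per parameter."""
    return {k: decay * teacher_params[k] + (1.0 - decay) * student_params[k]
            for k in teacher_params}

def filter_pseudo_labels(detections, threshold=pseudo_label_threshold):
    """Keep only target-domain detections whose score exceeds the threshold."""
    return [d for d in detections if d["score"] >= threshold]

# Usage with toy scalar "weights" and two candidate boxes
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student)  # w moves only slightly toward the student
dets = [{"score": 0.8, "label": 1}, {"score": 0.2, "label": 3}]
kept = filter_pseudo_labels(dets)       # only the 0.8-score box survives
```

The very high decay (0.9997) means the teacher changes slowly, which is what keeps its pseudo-labels stable across training steps.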

G-200010 commented 3 months ago

Okay, thank you. I will try your configuration.

h751410234 commented 3 months ago

OK. That is the first-stage config; for the second stage (the self-training stage), modify it the same way.
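As a closing note on the class-count error from the original question: a common cause is annotation category ids that do not fit inside `num_classes` (and, in DINO-style configs, `dn_labelbook_size` must cover `num_classes` as well). Below is a hedged sanity check for COCO-format annotations; the id convention (`max id < num_classes`) is an assumption based on DINO's config style, so adjust it to your own xView/DOTA conversion.

```python
# Illustrative sanity check (not DATR's code): verify that every category id in
# a COCO-style annotation dict is representable under num_classes, and that
# dn_labelbook_size covers num_classes.
def check_num_classes(ann, num_classes, dn_labelbook_size):
    cat_ids = {c["id"] for c in ann["categories"]}
    assert dn_labelbook_size >= num_classes, \
        "dn_labelbook_size must be >= num_classes"
    assert max(cat_ids) < num_classes, \
        f"category id {max(cat_ids)} out of range for num_classes={num_classes}"
    return True

# Usage: a toy 4-class annotation dict (ids 0 and 3 both fit under num_classes=4)
ann = {"categories": [{"id": 0, "name": "plane"}, {"id": 3, "name": "ship"}]}
check_num_classes(ann, num_classes=4, dn_labelbook_size=4)
```

If this check fails on your converted dataset, remap the category ids (or raise `num_classes` and `dn_labelbook_size` together) before training.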