tomgotjack opened this issue 5 months ago
Hi, same question here — waiting for an answer :D. I want to fine-tune while keeping the open-set capability, adding my own categories.
@mandyxiaomeng Hi, my code runs now. I used configs/pretrain/yolo_world_v2_l_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_val.py with a few small changes; the config is below:
```python
_base_ = ('../../third_party/mmyolo/configs/yolov8/'
          'yolov8_l_syncbn_fast_8xb16-500e_coco.py')
custom_imports = dict(imports=['yolo_world'], allow_failed_imports=False)

# hyper-parameters
num_classes = 80
num_training_classes = 80
max_epochs = 30  # maximum training epochs
close_mosaic_epochs = 30
save_epoch_intervals = 2
text_channels = 512
neck_embed_channels = [128, 256, _base_.last_stage_out_channels // 2]
neck_num_heads = [4, 8, _base_.last_stage_out_channels // 2 // 32]
base_lr = 1e-3
weight_decay = 0.0005
train_batch_size_per_gpu = 24
load_from = 'weights/yolo_world_v2_l_obj365v1_goldg_cc3mlite_pretrain-ca93cd1f.pth'
# text_model_name = '../pretrained_models/clip-vit-base-patch32-projection'
text_model_name = 'openai/clip-vit-base-patch32'
persistent_workers = False

# model settings
model = dict(
    type='YOLOWorldDetector',
    mm_neck=True,
    num_train_classes=num_training_classes,
    num_test_classes=num_classes,
    data_preprocessor=dict(type='YOLOWDetDataPreprocessor'),
    backbone=dict(
        _delete_=True,
        type='MultiModalYOLOBackbone',
        image_model={{_base_.model.backbone}},
        text_model=dict(
            type='HuggingCLIPLanguageBackbone',
            model_name=text_model_name,
            frozen_modules=['all'])),
    neck=dict(type='YOLOWorldPAFPN',
              guide_channels=text_channels,
              embed_channels=neck_embed_channels,
              num_heads=neck_num_heads,
              block_cfg=dict(type='MaxSigmoidCSPLayerWithTwoConv')),
    bbox_head=dict(type='YOLOWorldHead',
                   head_module=dict(type='YOLOWorldHeadModule',
                                    use_bn_head=True,
                                    embed_dims=text_channels,
                                    num_classes=num_training_classes)),
    train_cfg=dict(assigner=dict(num_classes=num_training_classes)))

# dataset settings
text_transform = [
    dict(type='RandomLoadText',
         num_neg_samples=(num_classes, num_classes),
         max_num_samples=num_training_classes,
         padding_to_max=True,
         padding_value=''),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                    'flip_direction', 'texts'))
]
train_pipeline = [
    *_base_.pre_transform,
    dict(type='MultiModalMosaic',
         img_scale=_base_.img_scale,
         pad_val=114.0,
         pre_transform=_base_.pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        scaling_ratio_range=(1 - _base_.affine_scale, 1 + _base_.affine_scale),
        max_aspect_ratio=_base_.max_aspect_ratio,
        border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2),
        border_val=(114, 114, 114)),
    *_base_.last_transform[:-1],
    *text_transform,
]
train_pipeline_stage2 = [*_base_.train_pipeline_stage2[:-1], *text_transform]

'''
obj365v1_train_dataset = dict(
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5Objects365V1Dataset',
        data_root='data/objects365v1/',
        ann_file='annotations/objects365_train.json',
        data_prefix=dict(img='train/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/obj365v1_class_texts.json',
    pipeline=train_pipeline)
'''
coco_train_dataset = dict(
    # _delete_=True,
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5CocoDataset',
        data_root='data/coco',
        ann_file='annotations/instances_train2017.json',
        data_prefix=dict(img='train2017/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=train_pipeline)

mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset',
                        data_root='data/mixed_grounding/',
                        ann_file='annotations/final_mixed_train_no_coco.json',
                        data_prefix=dict(img='gqa/images/'),
                        filter_cfg=dict(filter_empty_gt=False, min_size=32),
                        pipeline=train_pipeline)
'''
flickr_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/flickr/',
    ann_file='annotations/final_flickr_separateGT_train.json',
    data_prefix=dict(img='full_images/'),
    filter_cfg=dict(filter_empty_gt=True, min_size=32),
    pipeline=train_pipeline)
'''
train_dataloader = dict(batch_size=train_batch_size_per_gpu,
                        collate_fn=dict(type='yolow_collate'),
                        dataset=dict(_delete_=True,
                                     type='ConcatDataset',
                                     datasets=[
                                         # obj365v1_train_dataset,
                                         # flickr_train_dataset,
                                         coco_train_dataset,
                                         mg_train_dataset
                                     ],
                                     ignore_keys=['classes', 'palette']))

test_pipeline = [
    *_base_.test_pipeline[:-1],
    dict(type='LoadText'),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                    'scale_factor', 'pad_param', 'texts'))
]
coco_val_dataset = dict(
    _delete_=True,
    type='MultiModalDataset',
    dataset=dict(type='YOLOv5CocoDataset',
                 data_root='data/coco',
                 test_mode=True,
                 ann_file='annotations/instances_val2017.json',
                 data_prefix=dict(img='val2017/'),
                 batch_shapes_cfg=None),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=test_pipeline)
val_dataloader = dict(dataset=coco_val_dataset)
test_dataloader = val_dataloader

# evaluation settings
val_evaluator = dict(_delete_=True,
                     type='mmdet.CocoMetric',
                     proposal_nums=(100, 1, 10),
                     ann_file='data/coco/annotations/instances_val2017.json',
                     metric='bbox')
test_evaluator = val_evaluator

# training settings
default_hooks = dict(param_scheduler=dict(scheduler_type='linear',
                                          lr_factor=0.01,
                                          max_epochs=max_epochs),
                     checkpoint=dict(max_keep_ckpts=-1,
                                     save_best=None,
                                     interval=save_epoch_intervals))
custom_hooks = [
    dict(type='EMAHook',
         ema_type='ExpMomentumEMA',
         momentum=0.0001,
         update_buffers=True,
         strict_load=False,
         priority=49),
    dict(type='mmdet.PipelineSwitchHook',
         switch_epoch=max_epochs - close_mosaic_epochs,
         switch_pipeline=train_pipeline_stage2)
]
train_cfg = dict(max_epochs=max_epochs,
                 val_interval=5,
                 dynamic_intervals=[((max_epochs - close_mosaic_epochs),
                                     _base_.val_interval_stage2)])
optim_wrapper = dict(optimizer=dict(
    _delete_=True,
    type='SGD',
    lr=base_lr,
    momentum=0.937,
    nesterov=True,
    weight_decay=weight_decay,
    batch_size_per_gpu=train_batch_size_per_gpu),
                     paramwise_cfg=dict(
                         custom_keys={
                             'backbone.text_model': dict(lr_mult=0.01),
                             'logit_scale': dict(weight_decay=0.0)
                         }),
                     constructor='YOLOWv5OptimizerConstructor')
```
With this config I successfully trained on COCO mixed with GQA. However, after only 5 epochs the accuracy on COCO val stopped improving: during those five epochs AP rose from 45 (before fine-tuning) to 50, then plateaued. The hyperparameters may be off — I picked the learning rate and batch size casually, so adjust them if you use this. I ran some tests with the 12-epoch model: the open-set ability is preserved reasonably well, and non-COCO classes are still mostly recognized.
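Since the learning rate above was admittedly picked casually, one common heuristic (an assumption on my part, not something this thread prescribes) is the linear scaling rule: scale the learning rate in proportion to your total batch size relative to a known-good reference setup. A minimal sketch — the reference values below are hypothetical:

```python
def scale_lr(reference_lr: float, reference_total_batch: int,
             num_gpus: int, batch_per_gpu: int) -> float:
    """Linear scaling rule: lr grows proportionally with total batch size."""
    total_batch = num_gpus * batch_per_gpu
    return reference_lr * total_batch / reference_total_batch

# Hypothetical reference: lr 2e-3 at a total batch of 512.
# Fine-tuning on 1 GPU with 24 images per step:
lr = scale_lr(2e-3, 512, 1, 24)
print(lr)  # 9.375e-05
```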
Thanks a lot! I'll try mixing GQA with my own dataset.
Hi, may I ask what format your own dataset is in? I also want to use this on my own dataset — various tools, things like screwdrivers and scissors.
@dq1125 Just convert your annotations into a JSON file following the COCO format.
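For reference, a COCO-style detection annotation file is a single JSON object with three top-level lists: `images`, `annotations` (boxes as `[x, y, width, height]` in pixels), and `categories`. A minimal sketch with made-up entries (file names and categories are illustrative only):

```python
import json

coco = {
    "images": [
        # One entry per image; "id" is what annotations reference.
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; "id" must be unique.
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 50 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "screwdriver"},
        {"id": 2, "name": "scissors"},
    ],
}

# Serialize; in practice you would write this to e.g. instances_train.json.
text = json.dumps(coco)
```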
Got it, thanks a lot!
@tomgotjack Hi, could you share a training log? I'd like to compare it against my own run. At the start of fine-tuning my loss shows no obvious decrease — is that normal?
After training a while longer, only grad_norm shows a clear decrease.
A loss that doesn't visibly decrease is normal. Train a few more epochs and watch how the val accuracy changes. I was using a shared server and didn't keep the logs.
OK.
```
Epoch(val) [1][2500/2500]  coco/bbox_mAP: 0.4730  coco/bbox_mAP_50: 0.6330  coco/bbox_mAP_75: 0.5190  coco/bbox_mAP_s: 0.3170  coco/bbox_mAP_m: 0.5210  coco/bbox_mAP_l: 0.5980  data_time: 0.0009  time: 0.0540
```
@tomgotjack Hi, why is it that when I train with your config, grad_norm becomes very large after two epochs and then drops to 0? Do you know the cause? I'm also fine-tuning with COCO+GQA, using YOLOWorldDetector, 4 GPUs, batchsize_per_gpu=8, base_lr=1e-4.
Sorry, I haven't run into this problem. My current environment has no GPU, so I can't test it — you'll have to look for other causes.
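One generic mitigation for a grad_norm spike (an assumption on my part — not a verified fix for this thread's problem) is to enable gradient clipping on the optim_wrapper, which is a standard MMEngine option. A sketch of the config change; the `max_norm` value is purely illustrative:

```python
optim_wrapper = dict(
    # ... keep the existing optimizer / paramwise_cfg / constructor keys ...
    clip_grad=dict(max_norm=10.0, norm_type=2),  # max_norm is an illustrative choice
)
```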
@tomgotjack One more question: I'm fine-tuning on my own dataset (28 classes). Training on my dataset alone works fine, but once I mix in GQA it errors out with:
```
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/mnt/sdc/lishen/conda/envs/yoloWorld/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
```
I don't know why this happens — any ideas?
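An IndexError raised inside a DataLoader worker when mixing datasets often traces back to inconsistent annotations, e.g. an annotation whose `category_id` or `image_id` doesn't exist in the file. That is only a guess at the cause here, but a quick consistency check over a COCO-style annotation dict is cheap to run. A sketch (generic debugging code, not YOLO-World API):

```python
def check_coco_consistency(coco: dict) -> list:
    """Return a list of problems found in a COCO-style annotation dict."""
    image_ids = {img["id"] for img in coco.get("images", [])}
    category_ids = {cat["id"] for cat in coco.get("categories", [])}
    problems = []
    for ann in coco.get("annotations", []):
        if ann["image_id"] not in image_ids:
            problems.append(
                f"annotation {ann['id']}: unknown image_id {ann['image_id']}")
        if ann["category_id"] not in category_ids:
            problems.append(
                f"annotation {ann['id']}: unknown category_id {ann['category_id']}")
    return problems

# Example: one annotation points at a category that doesn't exist.
sample = {
    "images": [{"id": 1}],
    "categories": [{"id": 1, "name": "tool"}],
    "annotations": [{"id": 7, "image_id": 1, "category_id": 99}],
}
print(check_coco_consistency(sample))  # ['annotation 7: unknown category_id 99']
```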
Hi, do I need to download the JSON files and data for all four datasets in this config, or is downloading only GQA enough?
COCO and GQA are enough; the others aren't loaded. Just check which datasets the code actually loads.
发送自我的盖乐世
-------- 原始信息 -------- 发件人: wenqiuL @.> 日期: 2024/8/5 15:29 (GMT+08:00) 收件人: AILab-CVC/YOLO-World @.> 抄送: tomgotjack @.>, Mention @.> 主题: Re: [AILab-CVC/YOLO-World] 可以给一份混合GQA数据集微调COCO的config文件吗? (Issue #299)
@mandyxiaomenghttps://github.com/mandyxiaomeng 你好,我这边代码可以跑了。 我使用的是configs/pretrain/yolo_world_v2_l_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_val.py文件,做了一点小改动,代码如下:
base = ('../../third_party/mmyolo/configs/yolov8/' 'yolov8_l_syncbn_fast_8xb16-500e_coco.py') custom_imports = dict(imports=['yolo_world'], allow_failed_imports=False)
hyper-parameters
num_classes = 80 num_training_classes = 80 max_epochs = 30 # Maximum training epochs close_mosaic_epochs = 30 save_epoch_intervals = 2 text_channels = 512 neck_embed_channels = [128, 256, base.last_stage_out_channels // 2] neck_num_heads = [4, 8, base.last_stage_out_channels // 2 // 32] base_lr = 1e-3 weight_decay = 0.0005 train_batch_size_per_gpu = 24 load_from = 'weights/yolo_world_v2_l_obj365v1_goldg_cc3mlite_pretrain-ca93cd1f.pth'
text_model_name = '../pretrained_models/clip-vit-base-patch32-projection'
text_model_name = 'openai/clip-vit-base-patch32' persistent_workers = False
model settings
model = dict( type='YOLOWorldDetector', mm_neck=True, num_train_classes=num_training_classes, num_test_classes=num_classes, data_preprocessor=dict(type='YOLOWDetDataPreprocessor'), backbone=dict( delete=True, type='MultiModalYOLOBackbone', image_model={{base.model.backbone}}, text_model=dict( type='HuggingCLIPLanguageBackbone', model_name=text_model_name, frozen_modules=['all'])), neck=dict(type='YOLOWorldPAFPN', guide_channels=text_channels, embed_channels=neck_embed_channels, num_heads=neck_num_heads, block_cfg=dict(type='MaxSigmoidCSPLayerWithTwoConv')), bbox_head=dict(type='YOLOWorldHead', head_module=dict(type='YOLOWorldHeadModule', use_bn_head=True, embed_dims=text_channels, num_classes=num_training_classes)), train_cfg=dict(assigner=dict(num_classes=num_training_classes)))
dataset settings
text_transform = [ dict(type='RandomLoadText', num_neg_samples=(num_classes, num_classes), max_num_samples=num_training_classes, padding_to_max=True, padding_value=''), dict(type='mmdet.PackDetInputs', meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', 'flip_direction', 'texts')) ] train_pipeline = [ base.pre_transform, dict(type='MultiModalMosaic', img_scale=base.img_scale, pad_val=114.0, pre_transform=base.pre_transform), dict( type='YOLOv5RandomAffine', max_rotate_degree=0.0, max_shear_degree=0.0, scaling_ratio_range=(1 - base.affine_scale, 1 + base.affine_scale), max_aspect_ratio=base.max_aspect_ratio, border=(-base.img_scale[0] // 2, -base.img_scale[1] // 2), border_val=(114, 114, 114)), base.last_transform[:-1], _text_transform, ] train_pipeline_stage2 = [_base.train_pipeline_stage2[:-1], *text_transform]
'''
''' obj365v1_train_dataset = dict( type='MultiModalDataset', dataset=dict( type='YOLOv5Objects365V1Dataset', data_root='data/objects365v1/', ann_file='annotations/objects365_train.json', data_prefix=dict(img='train/'), filter_cfg=dict(filter_empty_gt=False, min_size=32)), class_text_path='data/texts/obj365v1_class_texts.json', pipeline=train_pipeline) ''' coco_train_dataset = dict( # delete=True, type='MultiModalDataset', dataset=dict( type='YOLOv5CocoDataset', data_root='data/coco', ann_file='annotations/instances_train2017.json', data_prefix=dict(img='train2017/'), filter_cfg=dict(filter_empty_gt=False, min_size=32)), class_text_path='data/texts/coco_class_texts.json', pipeline=train_pipeline)
mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset', data_root='data/mixed_grounding/', ann_file='annotations/final_mixed_train_no_coco.json', data_prefix=dict(img='gqa/images/'), filter_cfg=dict(filter_empty_gt=False, min_size=32), pipeline=train_pipeline) ''' flickr_train_dataset = dict( type='YOLOv5MixedGroundingDataset', data_root='data/flickr/', ann_file='annotations/final_flickr_separateGT_train.json', data_prefix=dict(img='full_images/'), filter_cfg=dict(filter_empty_gt=True, min_size=32), pipeline=train_pipeline) ''' train_dataloader = dict(batch_size=train_batch_size_per_gpu, collate_fn=dict(type='yolow_collate'), dataset=dict(delete=True, type='ConcatDataset', datasets=[ # obj365v1_train_dataset, # flickr_train_dataset, coco_train_dataset, mg_train_dataset ], ignore_keys=['classes', 'palette']))
test_pipeline = [ *base.test_pipeline[:-1], dict(type='LoadText'), dict(type='mmdet.PackDetInputs', meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', 'pad_param', 'texts')) ] coco_val_dataset = dict( delete=True, type='MultiModalDataset', dataset=dict(type='YOLOv5CocoDataset', data_root='data/coco', test_mode=True, ann_file='annotations/instances_val2017.json', data_prefix=dict(img='val2017/'), # ata_prefix=dict(img=''), batch_shapes_cfg=None), class_text_path='data/texts/coco_class_texts.json', pipeline=test_pipeline) val_dataloader = dict(dataset=coco_val_dataset) test_dataloader = val_dataloader
val_evaluator = dict(delete=True, type='mmdet.CocoMetric', proposal_nums=(100, 1, 10), ann_file='data/coco/annotations/instances_val2017.json', metric='bbox')
test_evaluator = val_evaluator
training settings
default_hooks = dict(param_scheduler=dict(scheduler_type='linear', lr_factor=0.01, max_epochs=max_epochs), checkpoint=dict(max_keep_ckpts=-1, save_best=None, interval=save_epoch_intervals)) custom_hooks = [ dict(type='EMAHook', ema_type='ExpMomentumEMA', momentum=0.0001, update_buffers=True, strict_load=False, priority=49), dict(type='mmdet.PipelineSwitchHook', switch_epoch=max_epochs - close_mosaic_epochs, switch_pipeline=train_pipeline_stage2) ] train_cfg = dict(max_epochs=max_epochs, val_interval=5, dynamic_intervals=[((max_epochs - close_mosaic_epochs), base.val_interval_stage2)]) optim_wrapper = dict(optimizer=dict( delete=True, type='SGD', lr=base_lr, momentum=0.937, nesterov=True, weight_decay=weight_decay, batch_size_per_gpu=train_batch_size_per_gpu), paramwise_cfg=dict( custom_keys={ 'backbone.text_model': dict(lr_mult=0.01), 'logit_scale': dict(weight_decay=0.0) }), constructor='YOLOWv5OptimizerConstructor')
evaluation settings
val_evaluator = dict(delete=True, type='mmdet.CocoMetric', proposal_nums=(100, 1, 10), ann_file='data/coco/annotations/instances_val2017.json', metric='bbox')
使用这个配置文件,成功混合COCO和GOA训练。不过才练了5个epoch,COCOval的精度已经不涨了。这五轮epoch训练过程中,AP从没有微调前的45上涨到50,之后就不再变动。超参数可能设的有问题,学习率和batch_size随手填的,用的话注意改一下。 我用训练12轮的模型做了一些测试,开集能力保留的还可以,非COCO的类也基本上都能识别。
@mandyxiaomenghttps://github.com/mandyxiaomeng 你好,我这边代码可以跑了。 我使用的是configs/pretrain/yolo_world_v2_l_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_val.py文件,做了一点小改动,代码如下:
base = ('../../third_party/mmyolo/configs/yolov8/' 'yolov8_l_syncbn_fast_8xb16-500e_coco.py') custom_imports = dict(imports=['yolo_world'], allow_failed_imports=False)
hyper-parameters
num_classes = 80 num_training_classes = 80 max_epochs = 30 # Maximum training epochs close_mosaic_epochs = 30 save_epoch_intervals = 2 text_channels = 512 neck_embed_channels = [128, 256, base.last_stage_out_channels // 2] neck_num_heads = [4, 8, base.last_stage_out_channels // 2 // 32] base_lr = 1e-3 weight_decay = 0.0005 train_batch_size_per_gpu = 24 load_from = 'weights/yolo_world_v2_l_obj365v1_goldg_cc3mlite_pretrain-ca93cd1f.pth'
text_model_name = '../pretrained_models/clip-vit-base-patch32-projection'
text_model_name = 'openai/clip-vit-base-patch32' persistent_workers = False
model settings
model = dict( type='YOLOWorldDetector', mm_neck=True, num_train_classes=num_training_classes, num_test_classes=num_classes, data_preprocessor=dict(type='YOLOWDetDataPreprocessor'), backbone=dict( delete=True, type='MultiModalYOLOBackbone', image_model={{base.model.backbone}}, text_model=dict( type='HuggingCLIPLanguageBackbone', model_name=text_model_name, frozen_modules=['all'])), neck=dict(type='YOLOWorldPAFPN', guide_channels=text_channels, embed_channels=neck_embed_channels, num_heads=neck_num_heads, block_cfg=dict(type='MaxSigmoidCSPLayerWithTwoConv')), bbox_head=dict(type='YOLOWorldHead', head_module=dict(type='YOLOWorldHeadModule', use_bn_head=True, embed_dims=text_channels, num_classes=num_training_classes)), train_cfg=dict(assigner=dict(num_classes=num_training_classes)))
dataset settings
text_transform = [ dict(type='RandomLoadText', num_neg_samples=(num_classes, num_classes), max_num_samples=num_training_classes, padding_to_max=True, padding_value=''), dict(type='mmdet.PackDetInputs', meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', 'flip_direction', 'texts')) ] train_pipeline = [ base.pre_transform, dict(type='MultiModalMosaic', img_scale=base.img_scale, pad_val=114.0, pre_transform=base.pre_transform), dict( type='YOLOv5RandomAffine', max_rotate_degree=0.0, max_shear_degree=0.0, scaling_ratio_range=(1 - base.affine_scale, 1 + base.affine_scale), max_aspect_ratio=base.max_aspect_ratio, border=(-base.img_scale[0] // 2, -base.img_scale[1] // 2), border_val=(114, 114, 114)), base.last_transform[:-1], _text_transform, ] train_pipeline_stage2 = [_base.train_pipeline_stage2[:-1], *text_transform]
'''
```python
'''
obj365v1_train_dataset = dict(
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5Objects365V1Dataset',
        data_root='data/objects365v1/',
        ann_file='annotations/objects365_train.json',
        data_prefix=dict(img='train/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/obj365v1_class_texts.json',
    pipeline=train_pipeline)
'''

coco_train_dataset = dict(
    # _delete_=True,
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5CocoDataset',
        data_root='data/coco',
        ann_file='annotations/instances_train2017.json',
        data_prefix=dict(img='train2017/'),
        filter_cfg=dict(filter_empty_gt=False, min_size=32)),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=train_pipeline)

mg_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/mixed_grounding/',
    ann_file='annotations/final_mixed_train_no_coco.json',
    data_prefix=dict(img='gqa/images/'),
    filter_cfg=dict(filter_empty_gt=False, min_size=32),
    pipeline=train_pipeline)

'''
flickr_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/flickr/',
    ann_file='annotations/final_flickr_separateGT_train.json',
    data_prefix=dict(img='full_images/'),
    filter_cfg=dict(filter_empty_gt=True, min_size=32),
    pipeline=train_pipeline)
'''

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    collate_fn=dict(type='yolow_collate'),
    dataset=dict(
        _delete_=True,
        type='ConcatDataset',
        datasets=[
            # obj365v1_train_dataset,
            # flickr_train_dataset,
            coco_train_dataset,
            mg_train_dataset
        ],
        ignore_keys=['classes', 'palette']))

test_pipeline = [
    *_base_.test_pipeline[:-1],
    dict(type='LoadText'),
    dict(type='mmdet.PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                    'scale_factor', 'pad_param', 'texts'))
]

coco_val_dataset = dict(
    _delete_=True,
    type='MultiModalDataset',
    dataset=dict(
        type='YOLOv5CocoDataset',
        data_root='data/coco',
        test_mode=True,
        ann_file='annotations/instances_val2017.json',
        data_prefix=dict(img='val2017/'),
        # data_prefix=dict(img=''),
        batch_shapes_cfg=None),
    class_text_path='data/texts/coco_class_texts.json',
    pipeline=test_pipeline)

val_dataloader = dict(dataset=coco_val_dataset)
test_dataloader = val_dataloader

val_evaluator = dict(
    _delete_=True,
    type='mmdet.CocoMetric',
    proposal_nums=(100, 1, 10),
    ann_file='data/coco/annotations/instances_val2017.json',
    metric='bbox')
test_evaluator = val_evaluator
```
```python
# training settings
default_hooks = dict(
    param_scheduler=dict(
        scheduler_type='linear',
        lr_factor=0.01,
        max_epochs=max_epochs),
    checkpoint=dict(
        max_keep_ckpts=-1,
        save_best=None,
        interval=save_epoch_intervals))
custom_hooks = [
    dict(type='EMAHook',
         ema_type='ExpMomentumEMA',
         momentum=0.0001,
         update_buffers=True,
         strict_load=False,
         priority=49),
    dict(type='mmdet.PipelineSwitchHook',
         switch_epoch=max_epochs - close_mosaic_epochs,
         switch_pipeline=train_pipeline_stage2)
]
train_cfg = dict(
    max_epochs=max_epochs,
    val_interval=5,
    dynamic_intervals=[((max_epochs - close_mosaic_epochs),
                        _base_.val_interval_stage2)])
optim_wrapper = dict(
    optimizer=dict(
        _delete_=True,
        type='SGD',
        lr=base_lr,
        momentum=0.937,
        nesterov=True,
        weight_decay=weight_decay,
        batch_size_per_gpu=train_batch_size_per_gpu),
    paramwise_cfg=dict(
        custom_keys={
            'backbone.text_model': dict(lr_mult=0.01),
            'logit_scale': dict(weight_decay=0.0)
        }),
    constructor='YOLOWv5OptimizerConstructor')
```
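For reference, `scheduler_type='linear'` with `lr_factor=0.01` in the `param_scheduler` hook above decays the learning rate linearly over training. A minimal sketch of the per-epoch factor (my own reimplementation for illustration, not the mmyolo hook itself):

```python
def linear_lr_factor(epoch: int, max_epochs: int, lr_factor: float) -> float:
    """Linear decay of the lr multiplier from 1.0 at epoch 0
    down to lr_factor at the final epoch."""
    return (1 - epoch / max_epochs) * (1.0 - lr_factor) + lr_factor

# With max_epochs=30 and lr_factor=0.01 as in this config:
print(linear_lr_factor(0, 30, 0.01))   # ≈ 1.0 at the start of training
print(linear_lr_factor(30, 30, 0.01))  # ≈ 0.01 at the end
```

With `base_lr = 1e-3`, the effective lr thus ends near `1e-5`, which may partly explain why COCO AP stops moving after the first few epochs.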
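The `paramwise_cfg.custom_keys` entry in the optimizer above is what keeps the CLIP text encoder nearly frozen during fine-tuning (on top of `frozen_modules=['all']`). A rough sketch of the name-matching behaviour; `resolve_lr` is my own illustrative helper, not part of mmengine:

```python
def resolve_lr(param_name, base_lr, custom_keys):
    """Return the effective lr for a named parameter: if a custom key
    occurs in the parameter name, scale base_lr by its lr_mult."""
    for key, opts in custom_keys.items():
        if key in param_name:
            return base_lr * opts.get('lr_mult', 1.0)
    return base_lr

custom_keys = {
    'backbone.text_model': dict(lr_mult=0.01),
    'logit_scale': dict(weight_decay=0.0),
}

# Text-encoder weights train 100x slower than the rest of the model:
print(resolve_lr('backbone.text_model.encoder.0.weight', 1e-3, custom_keys))  # ≈ 1e-05
print(resolve_lr('bbox_head.head_module.cls_pred.weight', 1e-3, custom_keys))  # 0.001
```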
With this config I successfully trained on mixed COCO and GQA data. However, after only 5 epochs the accuracy on COCO val stopped improving: over those 5 epochs, AP rose from 45 (before fine-tuning) to 50 and then plateaued. The hyperparameters may be off; I filled in the learning rate and batch size casually, so adjust them before use. I ran some tests with the 12-epoch checkpoint: the open-vocabulary capability is preserved fairly well, and non-COCO categories are mostly still recognized.
Hi, do the JSON annotations and images for all four datasets in this config need to be downloaded, or is downloading GQA alone enough?
@tomgotjack Thanks. Based on the following code I found that only the GQA data needs to be loaded:

```python
train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    collate_fn=dict(type='yolow_collate'),
    dataset=dict(
        _delete_=True,
        type='ConcatDataset',
        datasets=[
            # flickr_train_dataset,
            coco_train_dataset,
            mg_train_dataset
        ],
        ignore_keys=['classes', 'palette']))
```

The problem is that the download links in data.md appear to be dead, and I can't find final_mixed_train_no_coco.json. Could you share a working link for this part? Thanks a lot.

```python
mg_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/mixed_grounding/',
    ann_file='annotations/final_mixed_train_no_coco.json',
    data_prefix=dict(img='gqa/images/'),
    filter_cfg=dict(filter_empty_gt=False, min_size=32),
    pipeline=train_pipeline)
```
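For anyone wondering why `ignore_keys=['classes', 'palette']` is needed: `ConcatDataset` chains the COCO and grounding datasets end to end and must reconcile their differing metainfo, so conflicting keys are skipped. A toy sketch of the indexing behaviour (my own illustration, not the mmengine implementation):

```python
import bisect

class TinyConcatDataset:
    """Minimal ConcatDataset-style wrapper: one flat index space
    over several sub-datasets."""

    def __init__(self, datasets):
        self.datasets = datasets
        # cumulative sizes, e.g. [len(d0), len(d0) + len(d1), ...]
        self.cum_sizes = []
        total = 0
        for d in datasets:
            total += len(d)
            self.cum_sizes.append(total)

    def __len__(self):
        return self.cum_sizes[-1]

    def __getitem__(self, idx):
        # locate the sub-dataset this flat index falls into
        d_idx = bisect.bisect_right(self.cum_sizes, idx)
        start = 0 if d_idx == 0 else self.cum_sizes[d_idx - 1]
        return self.datasets[d_idx][idx - start]

coco_like = ['coco_0', 'coco_1', 'coco_2']
gqa_like = ['gqa_0', 'gqa_1']
concat = TinyConcatDataset([coco_like, gqa_like])
print(len(concat))  # 5
print(concat[3])    # 'gqa_0'
```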
Sorry, I haven't touched this project for a few months and didn't keep any related files. You'll need to search around yourself; anything I was able to find should be easy to find.
I tried adding the following directly to yolo_world_v2_l_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py:

```python
mg_train_dataset = dict(
    type='YOLOv5MixedGroundingDataset',
    data_root='data/mixed_grounding/',
    ann_file='annotations/final_mixed_train_no_coco.json',
    data_prefix=dict(img='gqa/images/'),
    filter_cfg=dict(filter_empty_gt=False, min_size=32),
    pipeline=train_pipeline)
```

and replacing train_dataloader with:

```python
train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    collate_fn=dict(type='yolow_collate'),
    dataset=dict(
        _delete_=True,
        type='ConcatDataset',
        datasets=[
            coco_train_dataset,
            mg_train_dataset
        ],
        ignore_keys=['classes', 'palette']))
```

but this doesn't run. Could you share a config file for reference?