IDEA-Research / DAB-DETR

[ICLR 2022] Official implementation of the paper "DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR"
Apache License 2.0

Can the inference test script be released? #10

Closed luomi1024 closed 2 years ago

luomi1024 commented 2 years ago

Can the inference test script be released? Further, could you also release scripts that output results in JSON format?

Thanks.

SlongLiu commented 2 years ago

Thanks for your suggestions. We will clean them up and release them later.

ola0x commented 2 years ago

Hello, can the inference script be released?

luomi1024 commented 2 years ago

You can write the JSON output yourself. I added a --test flag and then exported COCO-format JSON: mimic the evaluate method, adapt a test version of it, collect the results in a dict, and json.dump(res) does the rest.
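A minimal sketch of that recipe (a hypothetical dump_coco_json helper, not code from this repo), assuming the predictions are shaped like the 'bbox' postprocessor output used later in this thread, i.e. per-image dicts with 'scores', 'labels' and 'boxes' in absolute (x0, y0, x1, y1) coordinates:

import json

def dump_coco_json(res, out_path, score_thresh=0.0):
    # res: {image_id: {'scores': Tensor[N], 'labels': Tensor[N], 'boxes': Tensor[N, 4]}}
    coco_results = []
    for image_id, output in res.items():
        scores = output['scores'].tolist()
        labels = output['labels'].tolist()
        boxes = output['boxes'].tolist()      # assumed (x0, y0, x1, y1); COCO wants (x, y, w, h)
        for score, label, (x0, y0, x1, y1) in zip(scores, labels, boxes):
            if score < score_thresh:
                continue
            coco_results.append({
                'image_id': int(image_id),
                'category_id': int(label),
                'bbox': [x0, y0, x1 - x0, y1 - y0],
                'score': float(score),
            })
    with open(out_path, 'w') as f:
        json.dump(coco_results, f)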

SaifeiYan commented 2 years ago

Can you provide me with a script to run inference on my own dataset?

SaifeiYan commented 2 years ago

You can write the JSON output yourself. I added a --test flag and then exported COCO-format JSON: mimic the evaluate method, adapt a test version of it, collect the results in a dict, and json.dump(res) does the rest.

Could you send me your inference code?

SaifeiYan commented 2 years ago

Can the inference test script be released? Further, could you also release scripts that output results in JSON format?

Thanks. Did you find the inference code?

luomi1024 commented 2 years ago

The command I use is as follows: python .\test.py -m dab_deformable_detr --coco_path xxx --transformer_activation relu --output_dir result/test --batch_size 2 --resume logs/xxx/checkpoint.pth --test
My test.py is here: https://github.com/luomi1024/vit_tr_test/blob/main/test.py
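For readers who cannot open that link: --test is not a flag in the stock repo, so the script above presumably adds something like the following to the argument parser (a hypothetical sketch inferred from the description in this thread; the actual linked test.py may differ):

parser.add_argument('--test', action='store_true',
                    help='run inference only and export COCO-style JSON predictions')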

SaifeiYan commented 2 years ago

The command I use is as follows: python .\test.py -m dab_deformable_detr --coco_path xxx --transformer_activation relu --output_dir result/test --batch_size 2 --resume logs/xxx/checkpoint.pth --test
My test.py is here: https://github.com/luomi1024/vit_tr_test/blob/main/test.py

Could you tell me why the error below occurs?

RuntimeError: Error(s) in loading state_dict for DABDeformableDETR:
Missing key(s) in state_dict: "transformer.decoder.bbox_embed.6.layers.0.weight", "transformer.decoder.bbox_embed.6.layers.0.bias", "transformer.decoder.bbox_embed.6.layers.1.weight", "transformer.decoder.bbox_embed.6.layers.1.bias", "transformer.decoder.bbox_embed.6.layers.2.weight", "transformer.decoder.bbox_embed.6.layers.2.bias", "transformer.decoder.class_embed.0.weight", "transformer.decoder.class_embed.0.bias", "transformer.decoder.class_embed.1.weight", "transformer.decoder.class_embed.1.bias", "transformer.decoder.class_embed.2.weight", "transformer.decoder.class_embed.2.bias", "transformer.decoder.class_embed.3.weight", "transformer.decoder.class_embed.3.bias", "transformer.decoder.class_embed.4.weight", "transformer.decoder.class_embed.4.bias", "transformer.decoder.class_embed.5.weight", "transformer.decoder.class_embed.5.bias", "transformer.decoder.class_embed.6.weight", "transformer.decoder.class_embed.6.bias", "transformer.enc_output.weight", "transformer.enc_output.bias", "transformer.enc_output_norm.weight", "transformer.enc_output_norm.bias", "transformer.pos_trans.weight", "transformer.pos_trans.bias", "transformer.pos_trans_norm.weight", "transformer.pos_trans_norm.bias", "class_embed.6.weight", "class_embed.6.bias", "bbox_embed.6.layers.0.weight", "bbox_embed.6.layers.0.bias", "bbox_embed.6.layers.1.weight", "bbox_embed.6.layers.1.bias", "bbox_embed.6.layers.2.weight", "bbox_embed.6.layers.2.bias".
Unexpected key(s) in state_dict: "tgt_embed.weight", "refpoint_embed.weight".

luomi1024 commented 2 years ago

Is the network you trained dab_deformable_detr? And is checkpoint.pth the one generated by your training run (it should contain both the weights and the model state; check the size of the .pth, which is usually larger than 500 MB)? From the log it looks like weight entries are missing.
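A quick, generic PyTorch way to check what the checkpoint actually contains before resuming (the path below is a placeholder):

import os
import torch

ckpt_path = 'logs/dab_deformable_detr/R50/checkpoint.pth'   # placeholder path
print('size (MB):', os.path.getsize(ckpt_path) / 1e6)       # a full training checkpoint is large

ckpt = torch.load(ckpt_path, map_location='cpu')
print('top-level keys:', list(ckpt.keys()))                 # e.g. 'model', 'optimizer', 'epoch', ...
state_dict = ckpt['model'] if 'model' in ckpt else ckpt
print('num weight tensors:', len(state_dict))
# compare these names against the Missing/Unexpected keys in the error above
print(sorted(k for k in state_dict if 'class_embed' in k or 'refpoint_embed' in k))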

SaifeiYan commented 2 years ago

Yes, I am using dab_deformable_detr, and checkpoint.pth is the one generated by training. This is my test command: !python Predict.py -m dab_deformable_detr --coco_path ../coco --output_dir result/test --batch_size 2 --resume="logs/dab_deformable_detr/R50/checkpoint.pth" --test
Could you share your complete code with me?

luomi1024 commented 2 years ago

I only changed that part of the code. I just tried it again and it runs without problems. My command, for reference: python .\test.py -m dab_deformable_detr --coco_path data/coco --transformer_activation relu --output_dir result/test101 --batch_size 2 --resume logs/dab_deformable_detr/R50/checkpoint0249.pth --test

luomi1024 commented 2 years ago

Oh, try comparing it against the main.py setup; I modified some of the configuration. Check whether your main.py network arguments differ from mine, e.g. things like dcn. Thanks!
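If the checkpoint was written by a DETR-style main.py, it may also store the training args, which makes configuration mismatches easy to spot (hedged sketch; whether 'args' was saved, and the exact attribute names, depend on your training run):

import torch

ckpt = torch.load('logs/dab_deformable_detr/R50/checkpoint.pth', map_location='cpu')
train_args = ckpt.get('args', None)   # only present if main.py saved it
if train_args is not None:
    # attribute names are guesses based on the flags used in this thread; adjust as needed
    for name in ('modelname', 'num_queries', 'hidden_dim', 'transformer_activation'):
        print(name, '=', getattr(train_args, name, '<not stored>'))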

SaifeiYan commented 2 years ago

OK, I'll take a look. Thanks a lot.

SaifeiYan commented 2 years ago

Oh, try comparing it against the main.py setup; I modified some of the configuration. Check whether your main.py network arguments differ from mine, e.g. things like dcn. Thanks!

Hi, my engine.py has no definition of evaluate_test. Could you send that to me?

luomi1024 commented 2 years ago

For reference; it's actually quite simple: mimic evaluate and adapt a temporary version for this.

@torch.no_grad()
def evaluate_test(model, criterion, postprocessors, data_loader, base_ds, device,
                  output_dir, wo_class_error=False, args=None, logger=None):
    # lives in engine.py next to the existing evaluate(); reuses the same imports
    # (torch, os, utils, CocoEvaluator, PanopticEvaluator, to_device)
    try:
        need_tgt_for_training = args.use_dn
    except AttributeError:
        need_tgt_for_training = False

    model.eval()
    criterion.eval()

    metric_logger = utils.MetricLogger(delimiter="  ")
    if not wo_class_error:
        metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))
    header = 'Test:'

    iou_types = tuple(k for k in ('segm', 'bbox') if k in postprocessors.keys())
    coco_evaluator = CocoEvaluator(base_ds, iou_types)

    panoptic_evaluator = None
    if 'panoptic' in postprocessors.keys():
        panoptic_evaluator = PanopticEvaluator(
            data_loader.dataset.ann_file,
            data_loader.dataset.ann_folder,
            output_dir=os.path.join(output_dir, "panoptic_eval"),
        )
    out_list = []

    for samples, targets in metric_logger.log_every(data_loader, 10, header, logger=logger):
        samples = samples.to(device)
        targets = [{k: to_device(v, device) for k, v in t.items()} for t in targets]
        with torch.cuda.amp.autocast(enabled=args.amp):
            if need_tgt_for_training:
                outputs = model(samples, targets)
            else:
                outputs = model(samples)
        # rescale predictions back to the original image sizes
        orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)
        results = postprocessors['bbox'](outputs, orig_target_sizes)
        # one dict per batch: image_id -> {'scores', 'labels', 'boxes'}
        res = {target['image_id'].item(): output for target, output in zip(targets, results)}
        out_list.append(res)
    return out_list
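One possible way to combine the returned out_list with the JSON export idea mentioned earlier in this thread (hedged sketch; data_loader_val and the hypothetical dump_coco_json helper are assumptions, not repo code, and os is assumed to be imported as in main.py):

# out_list is a list of per-batch {image_id: output} dicts; flatten it, then dump it
out_list = evaluate_test(model, criterion, postprocessors, data_loader_val,
                         base_ds, device, args.output_dir, args=args)
merged = {k: v for res in out_list for k, v in res.items()}
dump_coco_json(merged, os.path.join(args.output_dir, 'predictions.json'))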
SaifeiYan commented 2 years ago

(quoting luomi1024's evaluate_test reply above in full)

Bro, I have tried it many times and it still doesn't work. Could you share your whole project with me, minus your dataset? Thanks a lot.

SlongLiu commented 2 years ago

I have provided a notebook for inference and visualization on a single image at https://github.com/IDEA-opensource/DAB-DETR/blob/main/inference_and_visualize.ipynb. I hope it is helpful to you.