IDEA-Research / DAB-DETR

[ICLR 2022] Official implementation of the paper "DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR"
Apache License 2.0

Convert DAB-deformable-DETR to ONNX #57

Open sazani opened 1 year ago

sazani commented 1 year ago

I am trying to convert both the model I trained and your pretrained model to ONNX, but unfortunately I get the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)

By the way, I have tried both static and dynamic input shapes, and I have used the following code:

import os, sys
import argparse

import numpy as np
import cv2
import torch
import torch.onnx
import torchvision
import torchvision.transforms.functional as TF
from PIL import Image

import transforms as T
from models import build_dab_deformable_detr
from util.slconfig import SLConfig

device = torch.device('cuda:0')

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_checkpoint_path', help="change the path of the model checkpoint.", default="./Checkpoints/checkpoint.pt")
    parser.add_argument('--model_config_path', help="change the path of the model config file", default="./Checkpoints/config.json")
    args = parser.parse_args()

    model_config_path = args.model_config_path
    model_checkpoint_path = args.model_checkpoint_path

    args_config = SLConfig.fromfile(model_config_path)
    model, criterion, post_processors = build_dab_deformable_detr(args_config)
    checkpoint = torch.load(model_checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint['model'])
    model = model.to(device)

    img_size = [1080, 1920]
    input = torch.zeros(1, 3, *img_size)
    input = input.to(device)

    model.eval()
    results = model(input)

    torch.onnx.export(
        model,
        input,
        "test.onnx",
        input_names=["input"],
        output_names=["output"],
        export_params=True,
        opset_version=11,  # I have also tried versions 12, 13, 14, 15
        # dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
        #               'output': {0: 'batch', 1: 'anchors'}},            # shape(1,25200,85)
        #               # if dynamic else None
        dynamic_axes=None,
    )
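For reference: the error points at index_select, so some index tensor created inside the forward pass may be left on a different device than the input. One untested workaround sketch is to keep the whole export on CPU, assuming the deformable-attention op in this repo can run without CUDA (which may not hold); the build/config calls are the same as above:

import torch
from models import build_dab_deformable_detr
from util.slconfig import SLConfig

# Untested sketch: run the whole export on CPU so no cuda:0/cpu mix can occur.
# Assumes the MSDeformAttn op has a CPU/export path, which may not hold in this repo.
args_config = SLConfig.fromfile("./Checkpoints/config.json")
model, _, _ = build_dab_deformable_detr(args_config)

checkpoint = torch.load("./Checkpoints/checkpoint.pt", map_location="cpu")
model.load_state_dict(checkpoint["model"])
model = model.cpu().eval()

dummy = torch.zeros(1, 3, 1080, 1920)  # CPU dummy input, same shape as above
torch.onnx.export(
    model,
    dummy,
    "test_cpu.onnx",
    input_names=["input"],
    output_names=["output"],
    export_params=True,
    opset_version=11,
)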
Zalways commented 1 year ago

Have you solved this problem? I tried to export the model to TorchScript (device is CPU) and ran into some problems. I finally got the model through the trace method, but the traced model doesn't work:

RuntimeError: The size of tensor a (32) must match the size of tensor b (237) at non-singleton dimension 1

I need your help! Have you met an error like this? I would appreciate any advice, thanks!
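A likely cause of that size mismatch (a guess, not verified on this model): torch.jit.trace records the graph for the single example input it is given, so shape-dependent quantities such as the flattened H*W token count are frozen at trace time, and calling the traced model with a different image size then fails with exactly this kind of error. A minimal sketch of that constraint, with illustrative paths and sizes (not taken from this repo):

import torch
import torch.nn.functional as F
from models import build_dab_deformable_detr
from util.slconfig import SLConfig

# Hedged sketch: a traced model only accepts inputs with the spatial size used at trace time.
args_config = SLConfig.fromfile("./Checkpoints/config.json")
model, _, _ = build_dab_deformable_detr(args_config)
model = model.cpu().eval()

example = torch.zeros(1, 3, 1080, 1920)   # the size the trace is specialized to
traced = torch.jit.trace(model, example, strict=False)

# Any later input must be resized/padded to the traced resolution first.
img = torch.rand(1, 3, 720, 1280)
img = F.interpolate(img, size=(1080, 1920), mode="bilinear", align_corners=False)
out = traced(img)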