Open d-c-a-i opened 1 year ago
Hi, I have been stuck here recently. Do you have any demo so far? Thank you!
I am trying to convert Grounding DINO to ONNX, but it seems difficult because of the multiple sub-models inside the GroundingDINO module and the use of NestedTensor.
I tried torch.onnx.export, but it always fails with:
torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::__ior_ to ONNX opset version 13 is not supported
Does anyone know which part of the model causes this problem?
any updates here?
I had to disable checkpointing to prevent another error, but now I'm stuck at the same place. It seems the aten::__ior__ op is not yet supported by the torch ONNX opset.
https://pytorch.org/docs/stable/onnx_supported_aten_ops.html
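For the `__ior__` failure specifically: the exporter has no symbolic for the in-place bitwise-or (`|=`) at opset 13, and the usual workaround is to rewrite it out-of-place in the model source. A minimal sketch of the rewrite, illustrated with numpy bool arrays since the pattern is identical on torch tensors (which exact GroundingDINO module contains the `|=` is not identified in this thread, so treat the location as something you have to find yourself):

```python
import numpy as np

# In-place `|=` on a tensor traces to aten::__ior__, which the ONNX
# exporter cannot map at opset 13. The out-of-place `|` (bitwise/logical or)
# usually exports fine, so rewrite the model code accordingly.
mask = np.array([True, False, False])
other = np.array([False, True, False])

# problematic in-place form (inside the model, on torch tensors):
#   mask |= other
# export-friendly rewrite:
mask = mask | other
print(mask.tolist())  # [True, True, False]
```

The same rewrite applies wherever the model does `x |= y` on tensors: replace it with `x = x | y` (or `x = x.logical_or(y)` for torch bool tensors) and re-run the export.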
I got the same error. Has anyone succeeded in converting to an ONNX or TorchScript model?
Here is the ONNX model I have converted:
It was a difficult process; I had to modify a lot of code (including inside the torch framework). For now it only supports fixed shapes:
- img: 800x800
- token: 1x5 (effectively two words, without the head, tail, and dot tokens)
- I have also sped up the model's performance at this fixed shape
Is it possible to modify the ONNX input to accept more words? I tried running inference with the ONNX you provided and found its input too limited; for example, the single word "watermelon" already exceeds [1,5].
That is the performance killer; I have a fix for the fixed-shape situation:
Hi @oylz! I met this error:
input: "/transformer/enc_out_class_embed/Unsqueeze_3_output_0"
input: "/transformer/enc_out_class_embed/Unsqueeze_4_output_0"
input: "/transformer/enc_out_class_embed/Unsqueeze_5_output_0"
input: "/transformer/enc_out_class_embed/Constant_7_output_0"
output: "/transformer/enc_out_class_embed/Slice_output_0"
name: "/transformer/enc_out_class_embed/Slice"
op_type: "Slice"
[07/10/2023-12:45:07] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[07/10/2023-12:45:07] [E] [TRT] ModelImporter.cpp:776: ERROR: builtin_op_importers.cpp:4493 In function importSlice:
[8] Assertion failed: (axes.allValuesKnown()) && "This version of TensorRT does not support dynamic axes."
Would you give me some advice?
Hi @oylz ~ Great job! Can you share the code? And why doesn't it support batch inference?
Can you provide an infer.py to test your ONNX model?
@oylz
Hello, could you show your code example for converting to ONNX? I want to learn from your code. Thank you so much.
@oylz Hello, could you please show your ONNX conversion code? Thanks a lot.
Hello, I found an ONNX converter for the GroundingDINO model here https://blog.openvino.ai/blog-posts/enable-openvino-tm-optimization-for-groundingdino as an intermediate step in converting to an OpenVINO model: https://github.com/wenyi5608/GroundingDINO/blob/main/demo/export_openvino.py
I got grounded.onnx and am now figuring out how to convert it to TensorRT.
Hello, I tried using this as well, but I was not able to successfully export it. Could you please tell me about your environment setup? I'm not sure if the failure was due to a version issue. Thank you!
I did everything as described in https://blog.openvino.ai/blog-posts/enable-openvino-tm-optimization-for-groundingdino
and got: -rw-rw-r-- 1 cnn cnn 694130127 Sep 27 17:58 grounded.onnx
Hello, I want to run the ONNX model on a GPU. Have you ever run your ONNX model on a GPU? I encountered this error when I tried:
/usr/local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider
My local env is (pip freeze --local):

```
addict==2.4.0
certifi==2023.7.22
charset-normalizer==3.2.0
cmake==3.27.5
coloredlogs==15.0.1
contourpy==1.1.1
cycler==0.11.0
defusedxml==0.7.1
filelock==3.12.4
flatbuffers==23.5.26
fonttools==4.42.1
fsspec==2023.9.2
-e git+https://github.com/wenyi5608/GroundingDINO.git@a3256bbca7fd365d1240d1830eaf8c13987e666d#egg=groundingdino
huggingface-hub==0.17.3
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
Jinja2==3.1.2
jstyleson==0.0.2
kiwisolver==1.4.5
lit==17.0.1
MarkupSafe==2.1.3
matplotlib==3.8.0
mpmath==1.3.0
networkx==3.1
numpy==1.26.0
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
onnx==1.14.1
onnxruntime==1.16.0
opencv-python==4.8.0.76
openvino==2023.1.0.dev20230811
openvino-dev==2023.1.0.dev20230811
openvino-telemetry==2023.1.1
packaging==23.1
Pillow==10.0.1
platformdirs==3.10.0
protobuf==4.24.3
pycocotools==2.0.7
pyparsing==3.1.1
python-dateutil==2.8.2
PyYAML==6.0.1
regex==2023.8.8
requests==2.31.0
safetensors==0.3.3
scipy==1.10.1
six==1.16.0
supervision==0.6.0
sympy==1.12
texttable==1.6.7
timm==0.9.7
tokenizers==0.13.3
tomli==2.0.1
torch==2.0.1
torchaudio==2.0.2
torchvision==0.15.2
tqdm==4.66.1
transformers==4.33.2
triton==2.0.0
typing_extensions==4.8.0
urllib3==2.0.5
yapf==0.40.2
zipp==3.17.0
```
Thanks!!!!!
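The CUDAExecutionProvider warning above is consistent with this env list: it pins `onnxruntime==1.16.0`, which is the CPU-only wheel on PyPI. A sketch of the usual fix (assuming a CUDA-capable machine with matching CUDA/cuDNN libraries; treat the exact version compatibility as something to check against the onnxruntime release notes):

```shell
# The CPU wheel (onnxruntime) and the GPU wheel (onnxruntime-gpu)
# conflict, so remove the CPU one before installing the GPU build.
pip uninstall -y onnxruntime
pip install onnxruntime-gpu

# Verify that CUDAExecutionProvider now appears in the provider list.
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```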
Thanks for your reply, but I meant how to convert the ONNX model to a TensorRT engine, not torch to ONNX.
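For the ONNX-to-engine step, the usual entry point is the trtexec CLI that ships with TensorRT. A minimal sketch using the grounded.onnx file name from this thread (whether it succeeds depends on the export, as the dynamic-axes errors later in this thread show):

```shell
# Build a TensorRT engine from the exported ONNX graph.
# --fp16 is optional; drop it if accuracy drifts too far from PyTorch.
trtexec --onnx=grounded.onnx --saveEngine=grounded.engine --fp16
```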
Hi, I want to download the ONNX model, but Google Drive requires permission to download it. I have requested access several times but haven't received it yet. How can I download the ONNX model directly, or could you approve my request on Google Drive? My username is 898926172@qq.com @oylz
Hi @minuenergy, does the ONNX model you got support inputs with dynamic scales? I hit a bug when modifying the size of the input image.
[E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Reshape node. Name:'/backbone/backbone.0/layers.0/blocks.1/attn/Reshape_1' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:44 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1419,3,49,49}, requested shape:{1,1247,3,49,49}
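The Reshape failure above is what a statically-shaped export looks like when fed a different image size: the requested shape {1,1247,3,49,49} was baked in at export time. To accept other sizes, the graph would need to be re-exported with `dynamic_axes`. A sketch of what that argument could look like (all input/output names here are assumptions for illustration; use the names your export script actually passes to torch.onnx.export):

```python
# Sketch only: "img", "input_ids", etc. are hypothetical names.
# Marking a dimension with a symbolic name keeps it dynamic in the
# exported graph instead of baking in one concrete size.
dynamic_axes = {
    "img": {2: "height", 3: "width"},   # NCHW image input
    "input_ids": {1: "seq_len"},        # text token ids
    "attention_mask": {1: "seq_len"},
    "logits": {1: "num_queries"},
    "boxes": {1: "num_queries"},
}

# The dict is then passed through to the exporter, roughly:
# torch.onnx.export(model, example_inputs, "grounded.onnx",
#                   input_names=["img", "input_ids", "attention_mask"],
#                   output_names=["logits", "boxes"],
#                   dynamic_axes=dynamic_axes, opset_version=16)
print(sorted(dynamic_axes))
```

Note that dynamic spatial axes are exactly what makes the TensorRT Slice import below fail, so there is a real trade-off between ONNX Runtime flexibility and TensorRT compatibility.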
I met the same problem. Is there a solution now?
I met the same problem; is there a solution now? My error is:
[12/15/2023-10:39:37] [E] [TRT] ModelImporter.cpp:726: While parsing node number 3432 [Slice -> "onnx::Slice_3626"]:
[12/15/2023-10:39:37] [E] [TRT] ModelImporter.cpp:727: --- Begin node ---
[12/15/2023-10:39:37] [E] [TRT] ModelImporter.cpp:728: input: "onnx::Slice_3616"
input: "onnx::Slice_22788"
input: "onnx::Slice_3622"
input: "onnx::Slice_22789"
input: "onnx::Slice_3625"
output: "onnx::Slice_3626"
name: "Slice_3432"
op_type: "Slice"
[12/15/2023-10:39:37] [E] [TRT] ModelImporter.cpp:729: --- End node ---
[12/15/2023-10:39:37] [E] [TRT] ModelImporter.cpp:732: ERROR: builtin_op_importers.cpp:4531 In function importSlice:
[8] Assertion failed: (axes.allValuesKnown()) && "This version of TensorRT does not support dynamic axes."
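One workaround worth trying for the `axes.allValuesKnown()` failure: constant-fold the graph before giving it to TensorRT, so the `axes` inputs of Slice nodes become static initializers instead of computed tensors. Polygraphy's sanitize subcommand can do this (a sketch; the file names are the ones used in this thread, and it only helps if the axes really are foldable constants):

```shell
# Polygraphy is NVIDIA's TensorRT/ONNX tooling; onnxruntime is used
# as the backend for constant folding.
pip install polygraphy onnxruntime

# Fold constants so dynamic-looking Slice axes become static values.
polygraphy surgeon sanitize grounded.onnx --fold-constants -o grounded_folded.onnx

# Retry the engine build on the folded graph.
trtexec --onnx=grounded_folded.onnx --saveEngine=grounded.engine
```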
Hi, I am a little stuck on how to use TensorRT to speed up GroundingDINO inference. GroundingDINO takes in both an image and text prompt and I am a bit lost on how to convert the text prompt to tensor. Can someone please give me some example code or suggestions on how to make it work? Thank you!
Brother, have you managed to convert it? GroundingDINO to TensorRT.
I have successfully generated the engine file, but the output accuracy does not match PyTorch. Have you resolved this?
Bro, do you have a TensorRT G-DINO now?
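On the question above about converting the text prompt to tensors: GroundingDINO normalizes the caption and then runs it through a BERT tokenizer; the tokenizer's integer arrays are what the text branch of an exported model consumes. Below is a minimal sketch of the caption normalization (modeled on the repo's `preprocess_caption` helper, but treat the exact rule as an assumption), with the tokenizer step shown in comments since it needs the `transformers` package and a model download:

```python
def preprocess_caption(caption: str) -> str:
    """Normalize a free-text prompt: lowercase, strip whitespace,
    and make sure it ends with the '.' phrase separator."""
    result = caption.lower().strip()
    if result.endswith("."):
        return result
    return result + "."

print(preprocess_caption("Watermelon"))   # watermelon.
print(preprocess_caption("cat . dog ."))  # cat . dog .

# The normalized caption is then tokenized (not run here):
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# enc = tok(preprocess_caption("watermelon"), return_tensors="np")
# enc["input_ids"], enc["attention_mask"], enc["token_type_ids"]
# are the arrays to feed as the text inputs of the ONNX/TensorRT model.
```

This also explains the [1,5] token limit discussed earlier: fixed-shape exports bake in a maximum tokenized caption length, so longer prompts must be truncated or the model re-exported with a larger (or dynamic) sequence dimension.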