NVIDIA-AI-IOT / nanosam

A distilled Segment Anything (SAM) model capable of running real-time with NVIDIA TensorRT

trtexec fails to build /mobile_sam_mask_decoder.onnx #16

Open fdarvas opened 8 months ago

fdarvas commented 8 months ago

Trying to run:

trtexec --onnx=data/mobile_sam_mask_decoder.onnx --saveEngine=data/mobile_sam_mask_decoder.engine --minShapes=point_coords:1x1x2,point_labels:1x1 --optShapes=point_coords:1x1x2,point_labels:1x1 --maxShapes=point_coords:1x10x2,point_labels:1x10

after successfully exporting mobile_sam_mask_decoder.onnx with:

python3 -m nanosam.tools.export_sam_mask_decoder_onnx --model-type=vit_t --checkpoint=assets/mobile_sam.pt --output=/mnt/e/data/mobile_sam_mask_decoder.onnx

resulting in this error:

onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/18/2023-11:39:43] [E] Error[4]: [graph.cpp::symbolicExecute::539] Error Code 4: Internal Error (/OneHot: an IIOneHotLayer cannot be used to compute a shape tensor)
[12/18/2023-11:39:43] [E] [TRT] ModelImporter.cpp:771: While parsing node number 146 [Tile -> "/Tile_output_0"]:
[12/18/2023-11:39:43] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[12/18/2023-11:39:43] [E] [TRT] ModelImporter.cpp:773: input: "/Unsqueeze_3_output_0" input: "/Reshape_2_output_0" output: "/Tile_output_0" name: "/Tile" op_type: "Tile"
[12/18/2023-11:39:43] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[12/18/2023-11:39:43] [E] [TRT] ModelImporter.cpp:777: ERROR: ModelImporter.cpp:195 In function parseGraph: [6] Invalid Node - /Tile [graph.cpp::symbolicExecute::539] Error Code 4: Internal Error (/OneHot: an IIOneHotLayer cannot be used to compute a shape tensor)
[12/18/2023-11:39:43] [E] Failed to parse onnx file
[12/18/2023-11:39:43] [I] Finished parsing network model. Parse time: 0.32614
[12/18/2023-11:39:43] [E] Parsing model failed
[12/18/2023-11:39:43] [E] Failed to create engine from model or file.
[12/18/2023-11:39:43] [E] Engine set up failed
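
A quick way to confirm which ops the parser is tripping on, without opening the file in a viewer, is to scan the exported graph with the onnx Python package. A minimal sketch (the path matches the commands above; adjust to your setup):

import onnx

# List the nodes the TensorRT log complains about (/OneHot and the /Tile at node 146).
model = onnx.load("data/mobile_sam_mask_decoder.onnx")
for i, node in enumerate(model.graph.node):
    if node.op_type in ("OneHot", "Tile"):
        print(f"node {i}: op_type={node.op_type} name={node.name}")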

Awesome0324 commented 6 months ago

I had the same problem. Have you solved it yet?

fdarvas commented 6 months ago

Unfortunately I don't have a solution for it yet.

Rich2020 commented 6 months ago

Bump...

Same issue - any help would be much appreciated; thanks!

fwcore commented 5 months ago

Two possible workarounds (use either one):

The resulting ONNX files obtained with either of the above have no OneHot op, and can be converted to TensorRT with no problem.
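
As an aside (this is an assumption on my part, not necessarily one of the two workarounds above): for the "cannot be used to compute a shape tensor" class of parser errors, a commonly suggested remedy is to constant-fold the ONNX graph with Polygraphy before running trtexec, e.g.:

polygraphy surgeon sanitize data/mobile_sam_mask_decoder.onnx --fold-constants -o data/mobile_sam_mask_decoder_folded.onnx

If the OneHot node feeds a shape computation whose inputs are constant, folding removes it, and the trtexec command above can then be pointed at the folded file.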


More details

Inspecting with Netron shows that the ONNX file exported by the following command contains a OneHot op.

python3 -m nanosam.tools.export_sam_mask_decoder_onnx --model-type=vit_t --checkpoint=assets/mobile_sam.pt --output=/mnt/e/data/mobile_sam_mask_decoder.onnx

However, the ONNX file provided by the Google Drive link in README.md has no OneHot op; it seems to have been replaced by some constant tensors and a Where op. I don't know how that file was exported from scratch.
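
The same comparison can be made programmatically by looking at the op-type histograms of the two files. A small sketch, assuming both files have been downloaded locally (the two filenames below are placeholders):

from collections import Counter

import onnx

# Count op types in each file; the self-exported one should show OneHot,
# while the Google Drive one should show Where instead.
for path in ("mobile_sam_mask_decoder_exported.onnx", "mobile_sam_mask_decoder_drive.onnx"):
    model = onnx.load(path)
    counts = Counter(node.op_type for node in model.graph.node)
    print(path, {op: counts[op] for op in ("OneHot", "Where", "Tile") if op in counts})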

binh234 commented 5 months ago

Two possible workarounds (use either one):