What is the framework of your model? If you are using --strict-model-config=false
to let Triton autofill the model config, the resulting config depends on how much information the framework can provide. See the documentation for details.
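If autofill does not recover the inputs, an explicit config.pbtxt can be supplied instead. A minimal sketch for a tensorrt_plan model; the tensor names, data types, dimensions, and max_batch_size below are assumptions and must be replaced with whatever the engine actually exposes:

```
name: "my_model"             # assumed model name
platform: "tensorrt_plan"
max_batch_size: 8            # assumption; depends on how the engine was built
input [
  {
    name: "input_0"          # must match an input binding name in the engine
    data_type: TYPE_FP32     # assumed dtype
    dims: [ 3, 224, 224 ]    # assumed dims
  }
]
output [
  {
    name: "output_0"         # must match an output binding name in the engine
    data_type: TYPE_FP32     # assumed dtype
    dims: [ 1000 ]           # assumed dims
  }
]
```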
Actually, I converted the model using torch2trt, and the CUDA engine is being used as a tensorrt_plan. The output nodes are shown, but the input nodes are not visible.
Now I am facing the errors below:
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot find binding of given name: input_0
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot find binding of given name: input_1
Any thoughts?
Thanks, Muhammad Ajmal Siddiqui
I think the model was not converted properly. I would suggest opening a ticket against torch2trt to verify.
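The "Cannot find binding of given name" errors usually mean the names in the config do not match the bindings the engine actually exposes. A minimal sketch for listing the engine's bindings with the TensorRT Python API so they can be compared against input_0 / input_1; the plan-file path is hypothetical:

```python
import tensorrt as trt

# Hypothetical path to the serialized engine produced by torch2trt;
# adjust to the actual model.plan location in the model repository.
ENGINE_PATH = "model_repository/my_model/1/model.plan"

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print every binding so the names used in config.pbtxt can be checked
# against what the engine actually exposes.
for i in range(engine.num_bindings):
    kind = "input " if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))
```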
Hi, I converted the model and successfully loaded it in Inference Server r20.03. I did not provide any configuration (config.pbtxt). The server console does not show any input details; only the output names are shown. When I provide the inputs (input_0) explicitly in the config, model loading fails with an "invalid input" error.
Please advise how to troubleshoot this further and what the possible cause of this issue might be.
Thanks, Muhammad Ajmal Siddiqui