Hi,
I have an issue using https://dev.ti.com/edgeaisession/ and edgeai-benchmark.

I have developed a quantization method for neural networks using ONNX, and I wanted to test it with the TIDLExecutionProvider, as I need to deploy it for one of my customers.

I used the custom-model-onnx example and uploaded the following model, based on [Mobilenet]: mobilenetv2-folded-quant.zip

As soon as I try to load it with TI's inference engine, the Jupyter kernel running the benchmarker dies.

I then tried a non-quantized model, but with no more success: the Jupyter kernel dies as soon as I use the TIDLExecutionProvider. mobilenetv2_fold.zip

I don't really know how to move forward.

Thanks for the help; I need to deploy for customers in the coming weeks.