Describe the bug
The QuartzNet model exported to ONNX cannot be loaded for inference. The code snippet below runs fine up until encoder_session = onnxruntime.InferenceSession(filename); at that point the code hangs with no error raised. I left it for ~10 minutes.
See the code snippet below: the model generated from the first config works for me, but the model generated by the second config does not.
Urgency
We would like to use this in a product being launched later this year, and speeding up inference would be very helpful.
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
ONNX Runtime installed from (source or binary): binary
To Reproduce
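A minimal sketch of the loading step that hangs, assuming the QuartzNet encoder has already been exported to ONNX; the filename below is a hypothetical placeholder for the exported model:

```python
import onnxruntime

# Hypothetical path to the exported QuartzNet encoder model.
filename = "quartznet_encoder.onnx"

# Loading the model exported from the first config completes normally.
# With the model from the second config, this call hangs indefinitely,
# with no error raised (left running for ~10 minutes).
encoder_session = onnxruntime.InferenceSession(filename)
```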
Expected behavior
The inference session for the second config should load, just as the first session does.