DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
error:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Scan node. Name:'custom_rnn_scan_Scan__25' Status Message: Non-zero status code returned while running GreaterOrEqual node. Name:'bidirectional_rnn/bw/bw/while/GreaterEqual_2' Status Message: /onnxruntime_src/include/onnxruntime/core/framework/op_kernel_context.h:42 const T* onnxruntime::OpKernelContext::Input(int) const [with T = onnxruntime::Tensor] Missing Input: bidirectional_rnn/bw/ToInt32:0
Could you give some tips for running inference on this model with ONNX Runtime?