XUJiahua opened 6 months ago
Do other models work?
Have you tried int8.onnx?
Are you on macOS?
I'm on macOS. The sherpa-onnx-streaming-zipformer-ctc-multi-zh-hans-2023-12-13.tar.bz2 model works fine. paraformer int8 hits the same problem.
./bin/sherpa-onnx \
--provider=coreml \
--tokens=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/tokens.txt \
--paraformer-encoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/encoder.int8.onnx \
--paraformer-decoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/decoder.int8.onnx \
./sherpa-onnx-streaming-paraformer-bilingual-zh-en/test_wavs/0.wav
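For comparison, the same invocation with the default CPU provider isolates whether the failure is specific to CoreML. This is a sketch assuming the same model directory layout as above; only the `--provider` flag changes:

```shell
# Same model and wav as the failing run, but with the CPU execution
# provider instead of CoreML. If this succeeds, the crash is specific
# to the onnxruntime CoreML provider rather than the model files.
./bin/sherpa-onnx \
  --provider=cpu \
  --tokens=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/tokens.txt \
  --paraformer-encoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/encoder.int8.onnx \
  --paraformer-decoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/decoder.int8.onnx \
  ./sherpa-onnx-streaming-paraformer-bilingual-zh-en/test_wavs/0.wav
```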
OnlineRecognizerConfig(feat_config=FeatureExtractorConfig(sampling_rate=16000, feature_dim=80, low_freq=20, high_freq=-400, dither=0), model_config=OnlineModelConfig(transducer=OnlineTransducerModelConfig(encoder="", decoder="", joiner=""), paraformer=OnlineParaformerModelConfig(encoder="./sherpa-onnx-streaming-paraformer-bilingual-zh-en/encoder.int8.onnx", decoder="./sherpa-onnx-streaming-paraformer-bilingual-zh-en/decoder.int8.onnx"), wenet_ctc=OnlineWenetCtcModelConfig(model="", chunk_size=16, num_left_chunks=4), zipformer2_ctc=OnlineZipformer2CtcModelConfig(model=""), nemo_ctc=OnlineNeMoCtcModelConfig(model=""), tokens="./sherpa-onnx-streaming-paraformer-bilingual-zh-en/tokens.txt", num_threads=1, warm_up=0, debug=False, provider="coreml", model_type="", modeling_unit="cjkchar", bpe_vocab=""), lm_config=OnlineLMConfig(model="", scale=0.5), endpoint_config=EndpointConfig(rule1=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=2.4, min_utterance_length=0), rule2=EndpointRule(must_contain_nonsilence=True, min_trailing_silence=1.2, min_utterance_length=0), rule3=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=0, min_utterance_length=20)), ctc_fst_decoder_config=OnlineCtcFstDecoderConfig(graph="", max_active=3000), enable_endpoint=True, max_active_paths=4, hotwords_score=1.5, hotwords_file="", decoding_method="greedy_search", blank_penalty=0, temperature_scale=2)
2024-05-22 16:12:14.407 sherpa-onnx[12946:1936497] 2024-05-22 16:12:14.407639 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running CoreML_18336092227849758783_3 node. Name:'CoreMLExecutionProvider_CoreML_18336092227849758783_3_3' Status Message: Error executing model: Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
libc++abi: terminating due to uncaught exception of type Ort::Exception
[1] 12946 abort ./bin/sherpa-onnx --provider=coreml
Sorry, I'm not able to fix this problem.
Thanks for the prompt feedback!
It's probably a bug in the onnxruntime CoreML execution provider. I found another project that runs a paraformer model with onnxruntime, and it hits the same problem: https://github.com/RapidAI/RapidASR/blob/main/cpp_onnx/readme.md
Could you try exporting the model yourself?
I'll give it a try. Is there an export script I can reference? It looks like I need to split the original PyTorch model into an encoder and a decoder and export them separately.
Has this been resolved?
reproduce: