Open we11as22 opened 1 month ago
sherpa-onnx crashes Colab :(
Could you be more specific?
```python
onnx_enc_model_fname = onnx_model_path + "/" + "encoder.onnx"
onnx_dec_model_fname = onnx_model_path + "/" + "decoder.onnx"
onnx_joint_model_fname = onnx_model_path + "/" + "joint.onnx"

model.encoder.export(onnx_enc_model_fname)
model.decoder.export(onnx_dec_model_fname)
model.joint.export(onnx_joint_model_fname)
```
```python
import sentencepiece as spm

model_file = "/content/tokenizer_all_sets/tokenizer.model"
vocab_file = "/content/tokenizer.vocab"  # model_file.replace(".model", ".vocab")

sp = spm.SentencePieceProcessor()
sp.Load(model_file)
vocabs = [sp.IdToPiece(id) for id in range(sp.GetPieceSize())]
with open(vocab_file, "w") as vfile:
    for v in vocabs:
        id = sp.PieceToId(v)
        vfile.write(f"{v}\t{sp.GetScore(id)}\n")
print(f"Vocabulary file is written to {vocab_file}")
```
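The script above writes one piece and its SentencePiece score per line, separated by a tab. A minimal self-contained sketch of that output format, using made-up placeholder pieces and scores instead of a real tokenizer model:

```python
import os
import tempfile

# Placeholder (piece, score) pairs standing in for a real SentencePiece model.
pieces = [("<unk>", 0.0), ("\u2581hello", -2.5), ("\u2581world", -3.1)]

# Write the same "<piece>\t<score>" layout the script above produces.
vocab_file = os.path.join(tempfile.mkdtemp(), "tokenizer.vocab")
with open(vocab_file, "w", encoding="utf-8") as vfile:
    for piece, score in pieces:
        vfile.write(f"{piece}\t{score}\n")

# Read it back: every line should split into exactly two tab-separated fields.
with open(vocab_file, encoding="utf-8") as vfile:
    rows = [line.rstrip("\n").split("\t") for line in vfile]

print(rows)
```

This is only a format illustration; in the real workflow the pieces and scores come from `sp.IdToPiece` and `sp.GetScore` as shown above.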
```python
import sherpa_onnx

encoder = onnx_enc_model_fname
decoder = onnx_dec_model_fname
joiner = onnx_joint_model_fname
tokens = "/content/tokenizer_all_sets/vocab.txt"
bpe_vocab = "/content/tokenizer_all_sets/tokenizer.vocab"

recognizer = sherpa_onnx.OfflineRecognizer.from_transducer(
    encoder=encoder,
    decoder=decoder,
    joiner=joiner,
    tokens=tokens,
    bpe_vocab=bpe_vocab,
    feature_dim=64,
    num_threads=1,
    provider="cpu",
    sample_rate=16000,
    debug=True,
)
```
*The Colab runtime dies at this point.
I suggest that you learn how to ask a question so that others can help you.
@we11as22 Please see https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html
It is supported in sherpa-onnx.
How can I run GigaAM_RNNT with ONNX? sherpa-onnx crashes Colab :( Please help me with this.