k2-fsa / sherpa

Speech-to-text server framework with next-gen Kaldi
https://k2-fsa.github.io/sherpa
Apache License 2.0

Problem of Sherpa Online Decoding Inference #509

Open lucy9527 opened 7 months ago

lucy9527 commented 7 months ago

Here is the script I use for decoding inference. The model `jit_script_chunk_64_left_512.pt` was trained with Zipformer.

sherpa/bin/sherpa-online \
      --tokens=$model_dir/tokens.txt \
      --nn-model=$model_dir/jit_script_chunk_64_left_512.pt \
      --decoding-method=modified_beam_search \
      --use-gpu=false \
      --padding-seconds=1.2 \
      $wav_list

In sherpa-online, besides the parameters shown above, are there any other parameters whose values significantly affect the recognition results? (I'm not familiar with the available parameter settings.)
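For reference, here is a sketch of how one might explore and tune the decoding flags. The exact set of flags depends on your sherpa build, so the `--num-active-paths` value below is an assumption; run `--help` first to confirm what your version supports.

```shell
# Print every flag your build of sherpa-online actually supports,
# together with its default value and description:
sherpa/bin/sherpa-online --help

# Sketch (not a definitive recipe): with modified_beam_search, the
# beam width is the flag most likely to change accuracy. Assuming your
# build exposes --num-active-paths (check --help), a larger value
# keeps more hypotheses alive per frame -- slower, but potentially
# more accurate than the default.
sherpa/bin/sherpa-online \
      --tokens=$model_dir/tokens.txt \
      --nn-model=$model_dir/jit_script_chunk_64_left_512.pt \
      --decoding-method=modified_beam_search \
      --num-active-paths=8 \
      --use-gpu=false \
      --padding-seconds=1.2 \
      $wav_list
```

Comparing the output of two runs that differ only in one flag is a simple way to see which parameters actually matter for your model and audio.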