-
One simple zh-CN sentence takes `1.32 sec` to normalize, and the result is incorrect.
```
>python normalize.py --text="123" --language=en
INFO:NeMo-text-processing:one hundred and twenty three
WARNING:NeMo-te…
```
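For reference, the `--language=en` output shown above can be approximated with a minimal standard-library sketch (illustrative only; this is not NeMo's WFST implementation, which handles far more than cardinals):

```python
# Minimal English number-to-words sketch (non-negative integers below one million).
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell_number(n: int) -> str:
    """Spell out 0 <= n < 1_000_000 in English words."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + (" " + ONES[rest] if rest else "")
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        head = ONES[hundreds] + " hundred"
        return head + (" and " + spell_number(rest) if rest else "")
    thousands, rest = divmod(n, 1000)
    head = spell_number(thousands) + " thousand"
    return head + (" " + spell_number(rest) if rest else "")

print(spell_number(123))  # -> "one hundred and twenty three"
```

A rule-based pass like this is fast but brittle; the WFST grammars in NeMo exist precisely to cover the many contexts (dates, money, measures) this sketch ignores.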
-
I successfully ran the EN context-aware TN described in the documentation.
[wfst_lm_rescoring.py](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/wfst/wfst_text_norm…
-
**Bug Detail**
Hello, I'm an engineer building a speech recognition model with your WeNet library. To your credit, we successfully built our e2e model with a reasonable WER. As a result of using vari…
-
**Describe the bug**
When running `merge_lora_weights/merge.py` with TP and PP set to 1 on a fine-tuned minitron checkpoint, I run into the following error:
```sh
raise RuntimeError(f"world_size ({w…
```
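The truncated error suggests a mismatch between the launcher's world size and the checkpoint's parallelism settings. As a rough illustration (hypothetical function name, not the actual NeMo code), such a guard typically looks like:

```python
def check_parallel_config(world_size: int, tp: int, pp: int) -> None:
    """Hypothetical guard: the number of launched processes (world_size)
    must equal tensor-parallel size * pipeline-parallel size."""
    if world_size != tp * pp:
        raise RuntimeError(
            f"world_size ({world_size}) != tensor_model_parallel_size ({tp}) "
            f"* pipeline_model_parallel_size ({pp})"
        )

check_parallel_config(world_size=1, tp=1, pp=1)  # passes
```

If merge.py enforces a check of this shape, then with TP and PP both set to 1 the script would need to be launched with exactly one process.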
-
Hi, my name is Nathan, and I would like to implement a WFST LM for CTC decoding.
So I referred to the run.sh located in 'librispeech/s0'.
I set
**nbpe=8000**
bpemode=unigram
I also ignored the log "Fai…
-
**Output of `docker version`:**
```
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:02:53 2016
OS/Arch: linux/amd6…
```
-
Following up on #1713.
Hello, I have confirmed on my side that the GPU build of runtime/libtorch is fine. Decoding with prefix beam search does use the GPU: GPU utilization is above 30%, and decoding is fairly fast.
The problem is that after adding TLG, decoding becomes very slow (the TLG used is 870 MB), and GPU utilization stays at 0 for long stretches (occasionally spiking briefly to 9%).
I printed some intermediate logs as follows:
I0307 08:30:37.921…
-
On our own dataset, we compared cudaFST with HLG (k2); the detailed results are as follows:
![image](https://github.com/nvidia-riva/riva-asrlib-decoder/assets/42910032/74adf78d-66f4-4aba-af1e-…
-
**Describe the bug**
When the WFS 2.0 implementation is tested against version 1.40, there are 6 failures on the official test site (https://cite.opengeospatial.org/teamengine/). All failures are…
-
While decoding, I got an error. I have finished building the TLG. There seems to be an error in decoder_main:
> perl: warning: Setting locale failed.
> perl: warning: Please check that your locale set…
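The perl locale warnings usually mean the shell or container has no locale configured. A common workaround (an assumption here, not confirmed for this specific setup) is to force the plain C locale before running decoder_main:

```shell
# Force a plain C locale so perl (called by the helper scripts)
# stops emitting "Setting locale failed" warnings.
export LC_ALL=C
export LANG=C
```

These warnings are usually harmless by themselves, so if decoder_main still fails after this, the underlying error is likely elsewhere.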