-
### 🚀 The feature
I’d like to propose the integration of the tree-constrained pointer generator (TCPGen) [1] and Minimum Biasing Word Error (MBWE) training [2] for contextual biasing into the torchaudio pack…
-
I'm confused by the class `Conv2dSubampling` in `convolution.py`. What does the second return value, `output_lengths`, mean?
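One plausible reading (this is a sketch, not torchaudio's actual implementation): each stride-2 convolution shortens the time axis by the standard convolution length formula, and `output_lengths` reports the valid (unpadded) length of each utterance after subsampling, so later layers can mask padding correctly. Assuming kernel size 3, stride 2, and no padding:

```python
def conv_out_len(in_len, kernel=3, stride=2, padding=0):
    # standard convolution output-length formula:
    # floor((L + 2*padding - kernel) / stride) + 1
    return (in_len + 2 * padding - kernel) // stride + 1

# Two stacked stride-2 convs (a common conformer front end) reduce the
# time axis by roughly 4x; `output_lengths` would then hold these
# per-utterance lengths for downstream masking.
lengths = [100, 57]  # frames per utterance before subsampling
subsampled = [conv_out_len(conv_out_len(n)) for n in lengths]
```

With the assumptions above, 100 and 57 input frames subsample to 24 and 13 frames respectively.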
-
Hi, thanks for working on this and sharing it on GitHub. The setup is very easy to follow and helps draw useful conclusions. I would like to suggest some improvements for the future:
1. As I was playin…
-
```
Traceback (most recent call last):
  File "tests/test_warprnnt_op.py", line 3, in <module>
    from warprnnt_tensorflow import rnnt_loss
  File "/NEWAI/Speech/Member/gaoxinglong/bin/anaconda3/lib/python3…
```
-
This issue is to track the follow-up work to #1137, which introduced `rnnt_loss` and `RNNTLoss` as a [prototype](https://pytorch.org/audio/stable/index.html) in `torchaudio.prototype.transducer` using…
-
I tried to intersect the output lattice (H) with LG using the `intersect_device` function.
But sometimes it outputs an empty hypothesis:
```
segment: 0
text:
```
Can you give any …
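Not an answer specific to k2, but as a toy illustration of why an intersection can come out empty: in a product-construction intersection, a hypothesis survives only if both machines accept the same label sequence, so if H and LG share no accepting path the result has no reachable final state and the decoded text is empty. A minimal pure-Python sketch (the `intersect` helper here is hypothetical, not k2's API):

```python
from collections import deque

def intersect(fsa_a, fsa_b):
    """Toy product-construction intersection of two acceptors.

    Each FSA is (arcs, start, finals), with arcs a set of
    (src_state, label, dst_state) triples.
    """
    arcs_a, start_a, finals_a = fsa_a
    arcs_b, start_b, finals_b = fsa_b
    start = (start_a, start_b)
    seen, queue, out_arcs = {start}, deque([start]), []
    while queue:
        sa, sb = queue.popleft()
        # pair up arcs leaving the current product state with equal labels
        for (pa, la, qa) in arcs_a:
            if pa != sa:
                continue
            for (pb, lb, qb) in arcs_b:
                if pb != sb or lb != la:
                    continue
                nxt = (qa, qb)
                out_arcs.append(((sa, sb), la, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    finals = {s for s in seen if s[0] in finals_a and s[1] in finals_b}
    return out_arcs, start, finals

# H accepts the label sequence 1 2; LG accepts 1 3: no common path,
# so the intersection has no reachable final state -> empty hypothesis.
H = ({(0, 1, 1), (1, 2, 2)}, 0, {2})
LG = ({(0, 1, 1), (1, 3, 2)}, 0, {2})
arcs, start, finals = intersect(H, LG)
```

If `finals` is empty there is no accepting path, which is exactly the `text:` with no output shown above; checking whether the two machines share the relevant labels is a first debugging step.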
-
I have trained a custom model and need to know how you built the NN LM.
-
```
import torch
import numpy as np
from warprnnt_pytorch import RNNTLoss
acts = np.random.rand(2,2,3,5)
labels = np.array([[1, 2],[2,2]])
act_length = np.array([2,2])
label_length = np.arra…
-
Hi,
There is a problem with training a Conformer+RNN-T model.
What CER and WER can be expected with one GPU?
I'm training the model on one RTX TITAN GPU, training the Conformer (encoder layers 16, encoder …