-
I am unable to run inference. I am running the command:
python infer.py dataset_dec_dev-other/ --task audio_pretraining --nbest 1 --path wav2vec_vox_960h_pl.pt --gen-subset train --results-path resu…
-
## 🚀 Feature
Given the lack of small, comprehensive audio datasets, I propose adding a speech MNIST dataset to torchaudio.
## Motivation
In the audio domain, we often lack s…
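To make the proposal concrete, here is a minimal sketch of what such a dataset's interface could look like. Everything here is an assumption for illustration: the class name `SpeechMNIST`, the 8 kHz sample rate, and the synthetic sine-tone "waveforms" (stand-ins for real recordings) are not part of any existing torchaudio API.

```python
# Hypothetical sketch of a SpeechMNIST-style dataset.
# All names and parameters here are illustrative assumptions.
import math


class SpeechMNIST:
    """Toy spoken-digit dataset: returns (waveform, label) pairs.

    The "waveform" is a synthetic sine tone standing in for a real
    recording, so the example stays self-contained.
    """

    SAMPLE_RATE = 8000  # assumed; spoken-digit sets often use 8 kHz
    NUM_DIGITS = 10

    def __init__(self, samples_per_digit=5):
        # One (digit, index) entry per sample, in digit order.
        self._items = [
            (digit, idx)
            for digit in range(self.NUM_DIGITS)
            for idx in range(samples_per_digit)
        ]

    def __len__(self):
        return len(self._items)

    def __getitem__(self, i):
        digit, _ = self._items[i]
        # Stand-in audio: a 0.1 s tone whose pitch depends on the digit.
        freq = 200.0 + 50.0 * digit
        n = self.SAMPLE_RATE // 10
        waveform = [
            math.sin(2 * math.pi * freq * t / self.SAMPLE_RATE)
            for t in range(n)
        ]
        return waveform, digit


ds = SpeechMNIST()
wave, label = ds[0]
print(len(ds), len(wave), label)  # 50 800 0
```

A real implementation would subclass `torch.utils.data.Dataset` and return tensors loaded from disk, but the `__len__`/`__getitem__` shape above is the interface torchaudio's existing datasets follow.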
-
### Question
I trained a Transformer model (Transformer encoder + Transformer criterion) with **Wav2letter v0.2**.
Unfortunately, I need to use **flashlight-consolidated's Wav2letter** (due to some u…
-
Unable to use KenLM rescoring because transcribe does not return logprobs.
**Steps/Code to reproduce the bug**
1. Cloned the repo [7916269](https://github.com/NVIDIA/NeMo/commit/79162696ea8c48734a260dd2…
-
### Question
I have tried the installation instructions [here](https://github.com/flashlight/wav2letter/wiki/Building-Python-bindingsl) with USE_CUDA=0. However, it still gives the following error:…
-
I ran the streaming inference code in a CUDA environment (flashlight and wav2letter built with CUDA), but the results were the same between CPU and CUDA.
So my question is: how do I run inference with CUDA?
Thanks!…
-
In the context of service applications, we need to load the model once and then serve client requests.
How can I do this?
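One common pattern for this is to load the model lazily on the first request and cache it for all later ones. The sketch below is generic: `fake_loader` and the callable "model" are placeholders I made up for illustration, not a real wav2letter API, whose actual inference entry points differ.

```python
# Generic "load once, serve many" sketch. The loader and model here
# are toy stand-ins, not a real wav2letter/flashlight API.
import threading


class InferenceService:
    """Loads an expensive model once and reuses it for every request."""

    def __init__(self, loader):
        self._loader = loader
        self._model = None
        self._lock = threading.Lock()

    def _get_model(self):
        # Double-checked locking: pay the load cost only on first call,
        # and stay safe if requests arrive from multiple threads.
        if self._model is None:
            with self._lock:
                if self._model is None:
                    self._model = self._loader()
        return self._model

    def handle_request(self, audio):
        model = self._get_model()
        return model(audio)


# Toy stand-ins for the real model loader and audio input.
load_count = 0


def fake_loader():
    global load_count
    load_count += 1
    # The "model" just reports the input length.
    return lambda audio: f"transcript:{len(audio)}"


service = InferenceService(fake_loader)
print(service.handle_request([0.0] * 16000))  # transcript:16000
service.handle_request([0.0] * 8000)
print(load_count)  # 1 -- the model was loaded only once
```

In a real service you would wrap `handle_request` behind an HTTP or gRPC endpoint; the key point is that the process keeps the loaded model in memory across requests instead of re-reading the checkpoint each time.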
-
Hello,
I was able to run training on the libri-100-clean dataset and got the expected results.
Currently I am trying to train a transformer acoustic model on the 1K-hour LibriSpeech data. I am usin…
-
I cannot train or decode without getting an mkldnn error, as shown below. I am using the pre-built CPU-backend docker image. W2l runs perfectly fine on a different computer. I believe the problem is t…
-
### Question
Following your important disclaimer, I went to flashlight (https://github.com/facebookresearch/flashlight) and built it completely.
At the Python bindings stage, I built flashlight and wan…