-
### Question
I have tried the installation instructions [here](https://github.com/flashlight/wav2letter/wiki/Building-Python-bindings), with USE_CUDA=0. However, it still gives the following error:…
-
It would be nice to use a `RangeDim` directly in the `mb.TensorSpec()` shape here instead of the symbol indirection:
```
flexible = ct.RangeDim()
@mb.program(input_specs=[mb.TensorSpec(shape=(1, …
```
-
## 🚀 Feature
Given the lack of small and comprehensive audio tasks, I would propose adding a speech MNIST dataset to torchaudio.
## Motivation
In the audio domain, we often lack s…
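Such a dataset would presumably follow torchaudio's usual map-style interface. Below is a minimal, torch-free sketch of that contract; the class name `SpeechMNIST`, the sample rate, and the synthetic waveforms are all hypothetical, chosen only to illustrate the `__len__`/`__getitem__` shape a real implementation would expose:

```python
import random


class SpeechMNIST:
    """Hypothetical map-style dataset yielding (waveform, sample_rate, digit_label).

    Real torchaudio datasets return tensors loaded from disk; here we generate
    plain lists of floats so the sketch runs without torch installed.
    """

    def __init__(self, num_samples=100, sample_rate=8000, seed=0):
        rng = random.Random(seed)
        # One second of fake audio per utterance, labeled with a digit 0-9.
        self._data = [
            (
                [rng.uniform(-1.0, 1.0) for _ in range(sample_rate)],
                sample_rate,
                rng.randrange(10),
            )
            for _ in range(num_samples)
        ]

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        waveform, sample_rate, label = self._data[idx]
        return waveform, sample_rate, label


ds = SpeechMNIST(num_samples=4)
waveform, sr, label = ds[0]
print(len(ds), sr, len(waveform))  # -> 4 8000 8000
```

A real version would download and cache the audio files, like the existing `SPEECHCOMMANDS` dataset does, but the indexing contract above is the part downstream `DataLoader` code relies on.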
-
The layout of the conv2d and linear layers in your encoder ends up cramping the output tokens:
```
forward torch.Size([32, 1600, 80])
self.encode torch.Size([32, 99, 1216])
self.linear …
```
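The time-axis shrinkage here (1600 frames in, 99 steps out) follows the standard convolution output-length formula, `floor((L + 2p - k) / s) + 1`, and computing it directly is a quick way to debug layouts like this. The kernel/stride values below are assumptions chosen only so the arithmetic reproduces 1600 → 99, not the poster's actual encoder config:

```python
def conv_out_len(length, kernel, stride, padding=0):
    """Output length of a convolution along the time axis."""
    return (length + 2 * padding - kernel) // stride + 1


# Hypothetical single conv layer with kernel=32, stride=16, no padding:
# (1600 - 32) // 16 + 1 = 98 + 1 = 99
print(conv_out_len(1600, kernel=32, stride=16))  # -> 99
```

Stacking layers just means applying the formula once per layer; if the final count disagrees with what the decoder expects, the mismatch is almost always in these kernel/stride/padding choices.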
-
Hello,
I was able to perform training using the libri-100-clean database, and I got the expected results.
Currently I am trying to train a transformer acoustic model using the 1K-hour Libri data. I am usin…
-
I am unable to do inference. I am running the command:
python infer.py dataset_dec_dev-other/ --task audio_pretraining --nbest 1 --path wav2vec_vox_960h_pl.pt --gen-subset train --results-path resu…
-
I ran the streaming inference code in a CUDA environment (flashlight and wav2letter built with CUDA), but the results were the same between CPU and CUDA.
I also have a question: how do I run inference with CUDA?
Thanks!…
-
### Question
As per your important disclaimer, I went to flashlight (https://github.com/facebookresearch/flashlight) and built it completely.
At the stage of the Python bindings, I built flashlight and wan…
-
In the scenario of service applications, we need to load the model once and serve client requests.
How can I do this?
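One common pattern is to load the model once at process start (or lazily on first use) and reuse it for every request. A minimal sketch using only the standard library; `load_model` and `handle_request` are placeholders for the real wav2letter loading and decoding calls, not its actual API:

```python
import functools


@functools.lru_cache(maxsize=1)
def load_model(path):
    """Placeholder for the expensive one-time model load.

    In a real service this would deserialize the acoustic model, language
    model, and decoder once; lru_cache guarantees subsequent calls with the
    same path return the already-loaded object.
    """
    print(f"loading model from {path}")
    return {"path": path}  # stands in for the loaded model object


def handle_request(audio):
    model = load_model("am.bin")  # cached: loads only on the first call
    # Placeholder "inference": a real handler would run decoding here.
    return f"decoded {len(audio)} samples with {model['path']}"


# Every request after the first reuses the cached model.
print(handle_request([0.0] * 16000))
print(handle_request([0.0] * 8000))
```

The same idea applies in C++ services: construct the decoder once (e.g. in `main` or a static initializer), guard it for thread safety, and pass a reference into each request handler instead of reloading per request.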
-
### Question
I'm using Ubuntu 18.04 with the wav2letter v0.2 branch, and I've successfully compiled and built wav2letter on my machine.
Now I'm working on building the inference example as a standalone C++ …