-
# 🚀 Faster batch translation with FSMT model
Currently, generating translations for multiple inputs at once is very slow using Transformers' `FSMTForConditionalGeneration` implementation. In fact i…
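For context, here is a minimal sketch of how batched FSMT generation is usually invoked; the `facebook/wmt19-en-de` checkpoint and batch size are assumptions, not taken from the report, and the heavy imports are kept local so the chunking helper stands alone.

```python
from typing import Iterable, List


def batches(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive chunks of at most `size` sentences."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def translate_batched(sentences: List[str], batch_size: int = 8) -> List[str]:
    # Imported here so the helper above works without transformers installed.
    from transformers import FSMTForConditionalGeneration, FSMTTokenizer

    name = "facebook/wmt19-en-de"  # assumed checkpoint
    tokenizer = FSMTTokenizer.from_pretrained(name)
    model = FSMTForConditionalGeneration.from_pretrained(name)

    out: List[str] = []
    for batch in batches(sentences, batch_size):
        # Padding lets sentences of different lengths share one forward pass.
        enc = tokenizer(batch, return_tensors="pt", padding=True)
        generated = model.generate(**enc)
        out.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return out
```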
-
Good evening here,
This looks awesome!
I'm trying to get a transcription from the pre-trained French model for a `.wav` file of **53 seconds.**
Here's my code:
```python
from speechbrain.pretrained im…
```
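Since the snippet above is cut off, here is a minimal sketch of the usual SpeechBrain call for French ASR; the checkpoint name, save directory, and file path are assumptions, not taken from the report.

```python
def transcribe_french(wav_path: str) -> str:
    # Import locally so the sketch can be read without SpeechBrain installed.
    from speechbrain.pretrained import EncoderDecoderASR

    asr = EncoderDecoderASR.from_hparams(
        source="speechbrain/asr-crdnn-commonvoice-fr",  # assumed checkpoint
        savedir="pretrained_asr",                       # assumed cache dir
    )
    return asr.transcribe_file(wav_path)
```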
-
I have trained a seq2seq AM on Hindi Devanagari data, and a KenLM on a Devanagari corpus. The results are satisfactory when decoding.
I want to run inference using the inference Docker with simple_…
-
I am trying to train a seq2seq model using BartModel. As per the BartTokenizer documentation, if I pass tgt_texts it should return decoder_attention_mask and decoder_input_ids; please check the attachm…
-
# ❓ Questions & Help
## Details
One peculiar finding is that when we ran the rag-sequence-nq model along with the provided wiki_dpr index, all models and index files were used as is, on the …
-
## Environment info
- `transformers` version: 4.5.0.dev0
- deepspeed version: 0.3.13
- Platform: Linux-4.15.0-66-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch vers…
-
Attention was originally proposed in the context of Machine Translation, so it makes sense to include it in our list of tasks.
Components:
- Select dataset
- Add Recurrent seq2seq model
- Add Tr…
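Since the components above center on attention-based seq2seq models, a tiny sketch of dot-product attention over encoder states may help; all names and shapes here are illustrative, not from the task list.

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def attend(query: np.ndarray, enc_states: np.ndarray):
    """Dot-product attention: one decoder query against T encoder states.

    query:      (d,)   current decoder hidden state
    enc_states: (T, d) encoder hidden states
    Returns (context (d,), weights (T,)).
    """
    scores = enc_states @ query    # (T,) alignment scores
    weights = softmax(scores)      # normalized attention weights
    context = weights @ enc_states  # (d,) weighted sum of encoder states
    return context, weights
```

The recurrent seq2seq variant would compute `query` from the decoder RNN state at each step; the Transformer generalizes this to many queries and learned projections.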
-
DeepMind's new paper, "Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron", uses a new attention mechanism, "GMM attention", which they find improves generalization to long utterances. Do …
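For readers unfamiliar with it, GMM attention (in the style of Graves 2013) places a mixture of Gaussians over encoder positions instead of computing content-based scores. A minimal sketch of the alignment weights for one decoder step; parameter names are illustrative, and the normalization at the end is for display only (the original formulation leaves the weights unnormalized):

```python
import numpy as np


def gmm_attention(mu: np.ndarray, sigma: np.ndarray, w: np.ndarray, T: int) -> np.ndarray:
    """GMM attention weights over T encoder positions for one decoder step.

    mu, sigma, w: (K,) mixture means, widths, and component weights.
    Returns a (T,) alignment over positions 0..T-1, normalized for illustration.
    """
    j = np.arange(T)[:, None]                               # (T, 1) positions
    phi = w * np.exp(-((j - mu) ** 2) / (2 * sigma ** 2))   # (T, K) per-component mass
    alpha = phi.sum(axis=1)                                 # (T,) mixture of Gaussians
    return alpha / alpha.sum()
```

In the full mechanism the means only move forward across decoder steps (e.g. `mu_t = mu_{t-1} + exp(delta_t)`), which makes the alignment monotonic and is what helps generalization to long utterances.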
-
## Environment info
- `transformers` version: 4.1.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?):…
-
## Environment info
- `transformers` version: 3.4.0
- Platform: Colab
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Us…