-
Hi, this is more of a feature request than a bug: it would be great if your audio demos worked on iOS! 🙏🥺
iOS requires a little bit of extra code (audio playback must be triggered by a user gesture) in order for audio to…
-
Hi Hugging Face Team.
First of all thank you for your work, I am a fan of Transformers. 😉
I'm opening this issue because I'm having trouble using the RagSequenceForGeneration model that I'm pa…
-
Hi,
I am currently trying to run this model - facebook/wav2vec2-xls-r-2b-22-to-16
https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16
The example code given using the pipeline is giving si…
-
## Environment info
- transformers version: 4.11.0
- Platform: Google Colab
- Python version: 3.7.12
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who…
piegu updated 2 years ago
-
## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-1047-azure-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0a0+2ecb2c7 (True)
- Tensorf…
-
Hi,
First of all, thanks for this great work — I did not expect Python to be this fast for such tasks.
I am trying to use the decoder with logits over a BPE vocabulary, but my BPE notation is different t…
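The kind of decoding being discussed here can be illustrated with a minimal sketch. This is not the library's decoder — it is a toy greedy CTC decode over a hypothetical SentencePiece-style BPE vocabulary, assuming index 0 is the blank token and "▁" marks a word boundary; a real vocabulary with a different notation is exactly where mismatches like the one described would surface.

```python
# Minimal greedy CTC decode over a toy BPE vocabulary (illustrative only).
# Assumes index 0 is the CTC blank and "▁" marks a word boundary, as in
# SentencePiece-style BPE; real vocabularies and logits will differ.

def ctc_greedy_decode(logits, vocab, blank_id=0):
    """Pick the argmax per frame, collapse repeats, drop blanks, join pieces."""
    best = [max(range(len(frame)), key=frame.__getitem__) for frame in logits]
    tokens = []
    prev = None
    for idx in best:
        if idx != prev and idx != blank_id:
            tokens.append(vocab[idx])
        prev = idx
    # Merge BPE pieces: "▁" starts a new word.
    return "".join(tokens).replace("▁", " ").strip()

vocab = ["<blank>", "▁he", "llo", "▁wor", "ld"]
logits = [
    [0.1, 0.9, 0.0, 0.0, 0.0],  # ▁he
    [0.8, 0.1, 0.1, 0.0, 0.0],  # blank
    [0.1, 0.0, 0.9, 0.0, 0.0],  # llo
    [0.1, 0.0, 0.8, 0.0, 0.0],  # llo (repeat, collapsed)
    [0.1, 0.0, 0.0, 0.9, 0.0],  # ▁wor
    [0.1, 0.0, 0.0, 0.0, 0.9],  # ld
]
print(ctc_greedy_decode(logits, vocab))  # hello world
```

A decoder expecting a different boundary convention (e.g. "##" continuation prefixes instead of "▁" word starts) would join these same pieces incorrectly, which is the core of the mismatch described above.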
-
Hi
After spending some time comparing the different outputs between the FairSeq (FS) and HuggingFace (HF) models, a couple of things have come to light. Probably the most significant is that the HF model …
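A comparison like the one described usually comes down to measuring element-wise drift between the two models' outputs on the same input. The sketch below shows one simple way to do that; the two lists are hypothetical stand-in values, not real FairSeq or HuggingFace outputs.

```python
# Hedged sketch: quantifying divergence between FairSeq (FS) and
# HuggingFace (HF) outputs on the same input. The values below are
# hypothetical stand-ins, not real model outputs.

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two flat outputs."""
    assert len(a) == len(b), "outputs must have the same shape"
    return max(abs(x - y) for x, y in zip(a, b))

fs_logits = [0.12, -1.05, 3.40, 0.07]   # hypothetical FairSeq output
hf_logits = [0.12, -1.04, 3.41, 0.07]   # hypothetical HF output

diff = max_abs_diff(fs_logits, hf_logits)
print(f"max abs diff: {diff:.4f}")
```

Tiny differences are usually numeric (dtype, op ordering); large or structured differences typically point at a porting discrepancy such as the one the issue goes on to describe.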
-
Hi @VibhuJawa ,
As we discussed in the chat, it seems there was a problem where `subword_tokenize` would not handle special tokens (e.g., `[CLS]`, `[SEP]`) correctly, and I'm creating this GitHub i…
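To make the failure mode concrete, here is a toy sketch of what correct behavior looks like: special tokens must pass through a subword tokenizer as single, unsplit tokens. The vocabulary and the `subword_tokenize` function below are hypothetical, not the actual API under discussion.

```python
# Toy illustration of the special-token issue: a subword tokenizer that
# splits words must still emit [CLS]/[SEP] as single, unsplit tokens.
# The vocabulary and tokenizer below are hypothetical.

VOCAB = {"[CLS]": 0, "[SEP]": 1, "hug": 2, "##ging": 3, "face": 4}
SPECIAL = {"[CLS]", "[SEP]"}

def subword_tokenize(words):
    """Greedy WordPiece-style split, passing special tokens through intact."""
    ids = []
    for word in words:
        if word in SPECIAL:
            ids.append(VOCAB[word])  # never split special tokens
            continue
        # Naive longest-match-first split over the toy vocab.
        start = 0
        while start < len(word):
            for end in range(len(word), start, -1):
                piece = word[start:end] if start == 0 else "##" + word[start:end]
                if piece in VOCAB:
                    ids.append(VOCAB[piece])
                    start = end
                    break
            else:
                raise ValueError(f"cannot tokenize {word!r}")
    return ids

print(subword_tokenize(["[CLS]", "hugging", "face", "[SEP]"]))  # [0, 2, 3, 4, 1]
```

A buggy tokenizer would instead try to split `[CLS]` into character pieces (or fail outright), which is the behavior the issue reports.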
-
**Is your feature request related to a problem? Please describe.**
I'm having a lot of difficulty understanding from the documentation how to include a tokenizer in the model. Trying to load tokenize…
-
Hi all,
Thanks for this great contribution :)
I was using the module to build a WordPiece vocab, using a very large txt file as input (115 GB).
Loading the data and tokenizing the words worked …
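With an input that size, the usual remedy is to stream the corpus rather than load it at once. The sketch below is not the library's trainer — it is a plain-Python illustration of streaming word-frequency counting, the typical first step before learning WordPiece merges, where memory use scales with vocabulary size rather than file size. The demo file stands in for the 115 GB corpus.

```python
# Hedged sketch: stream a very large corpus line by line and count word
# frequencies instead of reading the whole file into memory. A frequency
# table like this is the usual first step before learning WordPiece merges.
from collections import Counter
import os
import tempfile

def count_words(path, batch_words=100_000):
    """Stream the file; memory use scales with vocab size, not file size."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        batch = []
        for line in f:
            batch.extend(line.split())
            if len(batch) >= batch_words:
                counts.update(batch)
                batch = []
        counts.update(batch)
    return counts

# Demo on a small temporary file (a stand-in for the 115 GB corpus).
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("the cat sat\nthe cat\n")
    path = tmp.name
counts = count_words(path)
os.remove(path)
print(counts["the"], counts["cat"], counts["sat"])  # 2 2 1
```

Whether the crash reported here happens during this counting phase or later during merge learning determines which stage needs the streaming treatment.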