-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
A warning about dynamo_backend is displayed in the …
-
👋🏻 Hi, my ultimate goal is to compile my own T5-based models so that they work on Inferentia instances on AWS. As a test of this, I've been trying to run through this tutorial,
https://awsdocs-neu…
-
### Have you read the latest version of the FAQ?
- [X] I have visited the FAQ page right now and my issue is not present there
### Is there an existing issue for this?
- [X] I have searched the exi…
-
I have added AI translation to my fork using Helsinki-NLP models: https://huggingface.co/Helsinki-NLP
But I also added a language selection item there.
You can use a multilingual model for translation; I just don't k…
-
When I use the newest version (> 3.0), I encounter this error with a converted MarianMT model. When I downgrade to 2.24, it works fine.
-
# 🚀 Feature request
## Motivation
Hugging Face just released a huge pile of pretrained translation models. I just want to train a completely custom model on a custom language pair, without pr…
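A minimal sketch of what training from scratch could start from, assuming the `transformers` library (its `MarianConfig` and `MarianMTModel` classes) is available; every hyperparameter below is an illustrative placeholder, not a recommendation:

```python
# Sketch: initialize a randomly-weighted MarianMT model for a custom
# language pair -- no pretrained checkpoint involved.
# Assumes: pip install transformers torch
from transformers import MarianConfig, MarianMTModel

# Placeholder hyperparameters for a small model; tune for your own data.
# The special-token ids must lie inside the custom vocabulary range.
config = MarianConfig(
    vocab_size=8000,            # size of your custom subword vocabulary
    d_model=256,
    encoder_layers=3,
    decoder_layers=3,
    encoder_attention_heads=4,
    decoder_attention_heads=4,
    encoder_ffn_dim=1024,
    decoder_ffn_dim=1024,
    pad_token_id=0,
    eos_token_id=1,
    decoder_start_token_id=0,
)

# Random initialization, ready for from-scratch training on your pair.
model = MarianMTModel(config)
print(model.num_parameters())
```

From here the model can be trained like any other seq2seq `transformers` model (e.g. with the `Trainer` API), given a tokenizer built on the same custom vocabulary.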
-
## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.1 (False)
- Te…
-
Hi,
Thank you for this great repository!
My issue is more a query: are conversion and quantization of a Hugging Face Transformer (MarianMT model) hardware dependent?
I have converted and quant…
-
Hi,
When using a converted MarianMT transformer model, with or without quantization, on an RTX 3090 GPU, I didn't notice any latency/throughput improvement of the quantized models compared to the default m…
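One likely explanation, sketched below under the assumption that PyTorch dynamic quantization was used: `torch.quantization.quantize_dynamic` replaces `nn.Linear` layers with int8 kernels (fbgemm/qnnpack) that execute on the CPU, so a benchmark centered on an RTX 3090 would see no gain from them. A self-contained illustration with a plain `nn.Linear` stack standing in for the MarianMT model:

```python
# Sketch: PyTorch dynamic quantization targets CPU int8 kernels,
# which is one likely reason quantized models show no speedup in a
# GPU-centric setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Swap every nn.Linear for a dynamically quantized (int8) CPU variant.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized modules come from torch's quantized.dynamic namespace
# and run on CPU; inference still works, just not on the GPU.
out = quantized(torch.randn(1, 512))
print(type(quantized[0]).__module__)
```

If GPU throughput is the goal, lower-precision GPU paths (e.g. fp16 inference) are usually the more relevant comparison than int8 dynamic quantization.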
-
I am running a Python FastAPI app in a Docker container via VSCode's devcontainer setup. I am using the pretrained MarianMT model for translation from chinese_simple to english and am running into an error when…