SubtitleEdit / subtitleedit

the subtitle editor :)
http://www.nikse.dk/SubtitleEdit/Help
GNU General Public License v3.0

NLLB Error while translating: "thammegowda-nllb-serve" requires a web server running locally! #8852

Open LeitHunt opened 1 month ago

LeitHunt commented 1 month ago

I followed this tutorial and am getting the error: "thammegowda-nllb-serve" requires a web server running locally! I am trying to translate from Japanese to English.

[Screenshot: 2024-09-24 200543]

In my CMD:

Microsoft Windows [Version 10.0.22631.4169]
(c) Microsoft Corporation. All rights reserved.

C:\Users\xxxx>nllb-serve
INFO:root:torch device=cuda
INFO:root:Loading model facebook/nllb-200-distilled-600M ...
INFO:root:Loading default tokenizer for facebook/nllb-200-distilled-600M ...
C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
INFO:root:System Info: ${'transformer': '4.44.2', 'Python Version': '3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]', 'Platform': 'Windows-10-10.0.22631-SP0', 'Platform Version': '10.0.22631', 'Processor': 'Intel64 Family 6 Model 158 Stepping 10, GenuineIntel', 'GPU': "_CudaDeviceProperties(name='NVIDIA GeForce GTX 1650', major=7, minor=5, total_memory=4095MB, multi_processor_count=16)", 'Cuda Version': '12.4', 'model_id': 'facebook/nllb-200-distilled-600M'}
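
One thing worth checking is whether the nllb-serve web server is actually reachable on the same machine before Subtitle Edit tries to use it. Below is a minimal sketch, assuming nllb-serve's default port 6060 and a /translate endpoint that accepts source, src_lang and tgt_lang form fields (both details are assumptions, not confirmed in this thread):

# Hypothetical reachability check for a locally running nllb-serve instance.
# Port 6060 and the /translate form fields below are assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1:6060/translate",
    data={
        "source": "こんにちは",   # text to translate
        "src_lang": "jpn_Jpan",   # NLLB code for Japanese
        "tgt_lang": "eng_Latn",   # NLLB code for English
    },
    timeout=60,
)
print(resp.status_code)
print(resp.text)

If this fails with a connection error, the server never finished starting, which would match the missing-attribute error discussed below.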

I tried to find a solution online and through ChatGPT.

ChatGPT: The error you're encountering indicates that the NllbTokenizerFast object does not have the attribute lang_code_to_id. This typically suggests that the tokenizer you are using is either outdated or improperly initialized.

Here are some steps to troubleshoot and resolve the issue:

  1. Update Transformers Library: Ensure that you have the latest version of the transformers library. You can update it using pip:

pip install --upgrade transformers

  2. Check Tokenizer Initialization: Make sure you are correctly initializing the tokenizer. It should look something like this:

from transformers import NllbTokenizerFast

tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")

  3. Accessing Language Codes: Instead of lang_code_to_id, you might need to access language codes differently. You can check available language codes with:

print(tokenizer.additional_special_tokens)

  4. Update Your Code: If lang_code_to_id is not available, check the documentation for the latest method to get the language IDs or update the way you handle language parameters in your translation request.

  5. Check Documentation and Issues: Visit the official Hugging Face documentation or the GitHub repository for any changes in the API or for reported issues related to the NllbTokenizerFast.

If after these steps the error persists, please provide additional context or code snippets for further diagnosis.
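
For reference, the lang_code_to_id lookup that these steps revolve around was deprecated and then removed from the NLLB tokenizers in recent transformers releases, which is why it is missing on 4.44.2. A minimal sketch of the replacement lookup (illustrative only, not Subtitle Edit's or nllb-serve's actual code):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# NLLB language codes are registered as special tokens, so their IDs can be
# resolved with convert_tokens_to_ids instead of the removed lang_code_to_id.
eng_id = tokenizer.convert_tokens_to_ids("eng_Latn")
jpn_id = tokenizer.convert_tokens_to_ids("jpn_Jpan")
print(eng_id, jpn_id)

nllb-serve apparently still relies on lang_code_to_id, which would explain why it breaks on newer transformers; see the last comment below.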

matsbar commented 4 weeks ago

I can confirm that this happens to me too, both on Windows 11 Pro and on Linux Mint 22 Xfce. I suspect that we are too low on RAM; I too have only 4 GB, same as you.

zerofdest commented 2 days ago

It has nothing to do with RAM. The real cause is that the installed version of transformers is too new for nllb-serve (recent releases removed the lang_code_to_id tokenizer attribute that it still uses), so the solution is to downgrade to an older version, such as 4.37.0:

pip install transformers==4.37.0
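
For anyone trying the downgrade, here is a rough sanity check that the older version is the one actually being picked up (the version number follows the suggestion above; attribute availability may differ between releases):

# Sanity check after downgrading transformers.
# Older releases such as 4.37.0 still expose lang_code_to_id on the NLLB
# tokenizer, which nllb-serve relies on.
import transformers
from transformers import AutoTokenizer

print(transformers.__version__)  # expect 4.37.0 after the downgrade

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
print(hasattr(tokenizer, "lang_code_to_id"))  # expect True on 4.37.0

Then restart nllb-serve and retry the translation from Subtitle Edit.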