huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

[Bug] KeyError: 'nllb-moe' when trying to load `nllb-moe-54b` model #22461

Closed NanoCode012 closed 1 year ago

NanoCode012 commented 1 year ago

System Info

Who can help?

@ArthurZucker from https://github.com/huggingface/transformers/pull/22024

Information

Tasks

Reproduction

Following example script on https://huggingface.co/facebook/nllb-moe-54b (but pointing to local git copy),

  1. pip install git+https://github.com/huggingface/transformers.git
  2. python
    >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    >>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
        config, kwargs = AutoConfig.from_pretrained(
      File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained
        config_class = CONFIG_MAPPING[config_dict["model_type"]]
      File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__
        raise KeyError(key)
    KeyError: 'nllb_moe'

Note: The system might not have enough RAM, but the error was raised immediately after reaching this line, so it does not look like an OOM issue.

Expected behavior

The model should load.

ArthurZucker commented 1 year ago

That's completely right! The config.model_type should be nllb-moe instead of nllb_moe. Will modify this in the checkpoints and in the code. Thanks for reporting!
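
For context, AutoConfig resolves config_dict["model_type"] through CONFIG_MAPPING (see the traceback above), so the underscore/hyphen mismatch is enough to raise the KeyError. A minimal sketch of how to check which keys a local install registers (illustrative only; the import path is the one shown in the traceback):

>>> from transformers.models.auto.configuration_auto import CONFIG_MAPPING
>>> "nllb-moe" in CONFIG_MAPPING   # True only on an install that already includes NLLB-MoE support
>>> "nllb_moe" in CONFIG_MAPPING   # the underscore variant is not a registered key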

NanoCode012 commented 1 year ago

@ArthurZucker , hello!

I noticed that and also tried changing it, but weirdly I got the same error. I will try it again later.

It is the config.json, right?

ArthurZucker commented 1 year ago

Yes, the config.json was wrong!

NanoCode012 commented 1 year ago

Hello @ArthurZucker , sorry for bothering you again.

I have git pulled the latest Hugging Face repo and still get the same error.

>>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b", use_auth_token=True, src_lang="eng_Latn")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__
    raise KeyError(key)
KeyError: 'nllb-moe'

Do I need to install from your branch https://github.com/huggingface/transformers/pull/22470?

Edit: Oh, it was just merged 1 min ago.

ArthurZucker commented 1 year ago

This is normal! You need to update your config.json file.

ArthurZucker commented 1 year ago

If you were using a hub model, it would update automatically. The PR fixes the default value, but for models that were already downloaded you need to update the config.
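
A minimal sketch of that manual config update for an already-downloaded copy (the path is the local clone from the reproduction above; adjust it to your setup):

import json

config_path = "../hub/nllb-moe-54b/config.json"  # local clone used in the reproduction
with open(config_path) as f:
    config = json.load(f)

config["model_type"] = "nllb-moe"  # older checkpoints shipped "nllb_moe"

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)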

NanoCode012 commented 1 year ago

If you were using a hub model, it would update automatically. The PR fixes the default value, but for models that were already downloaded you need to update the config.

Yes, I tried both: 1) updating config.json, and 2) git pulling the downloaded HF model repo https://huggingface.co/facebook/nllb-moe-54b/commit/83c96e4658a2e02c182d0ab794229301862791ee (not the transformers repo).

I'm not sure if the config.json is cached somewhere?

Edit: I will pip install the latest transformers from source.
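
One way to rule out a stale transformers package rather than a stale config (an illustrative check, not from the thread):

>>> import transformers
>>> transformers.__version__   # a ".dev0" suffix indicates an install from source
>>> transformers.__file__      # shows which copy of the package is actually imported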

NanoCode012 commented 1 year ago

Hm, I have pip installed from source and also confirmed that config.json got updated.

Unpacking objects: 100% (3/3), 342 bytes | 0 bytes/s, done.
From https://huggingface.co/facebook/nllb-moe-54b
   59fc265..83c96e4  main       -> origin/main
Updating 59fc265..83c96e4
Fast-forward
 config.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

NanoCode012 commented 1 year ago

I just saw. The KeyError is now 'nllb-moe', which is not the same error as in the first post, where it was 'nllb_moe'.

ArthurZucker commented 1 year ago

Okay, let me have another look!

NanoCode012 commented 1 year ago

Okay, let me have another look!

Sorry for disturbing. Thank you very much!

ArthurZucker commented 1 year ago

So, running model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts") definitely worked for me:

In [3]: model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")
Downloading (…)lve/main/config.json: 100%|██████████| 1.40k/1.40k [00:00<00:00, 272kB/s]
Downloading (…)model.bin.index.json: 100%|██████████| 91.5k/91.5k [00:00<00:00, 992kB/s]
Downloading (…)00001-of-00002.bin: 100%|██████████| 7.75G/7.75G [02:04<00:00, 62.0MB/s]
Downloading (…)00002-of-00002.bin: 100%|██████████| 9.36G/9.36G [02:17<00:00, 68.0MB/s]
Downloading shards: 100%|██████████| 2/2 [04:23<00:00, 131.96s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:11<00:00,  5.82s/it]

In [4]: 

The issue is most probably related to the config / the cache! But I will still look into it. In the meantime, use the hub model directly 😉
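
Loading by hub id instead of a local path, as suggested, would look roughly like this (it fetches the fixed config.json automatically; note that the 54B checkpoint is very large, so the small test repo is handy for a sanity check):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-moe-54b"  # or "hf-internal-testing/random-nllb-moe-2-experts" for a quick test
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)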

NanoCode012 commented 1 year ago

Hello @ArthurZucker , thank you for the info!

huaxueqi commented 1 year ago

Hello @ArthurZucker , thank you for the info!

Is the problem solved?

NanoCode012 commented 1 year ago

Hello @ArthurZucker , thank you for the info!

Is the problem solved?

Hey! I have not tried this yet. I think it could've been fixed; I probably had a caching issue with packages.

I have not recently been able to get a machine to run this.

huaxueqi commented 1 year ago

I have the same problem. I don't think changing the config file to "nllb-moe" is the solution; I tried many times, and nothing is cached since it is my first time using it.

ArthurZucker commented 1 year ago

Hey! Really sorry, but I can't reproduce this now: https://colab.research.google.com/drive/1uoAKGbkJA4rnZV9Lwg1unOvvEloudcvM?usp=sharing

This notebook works as expected out of the box. I am pretty sure it is either:

  • you are not using the main transformers branch
  • your file is not well defined

huaxueqi commented 1 year ago

Hey! Really sorry, but I can't reproduce this now: https://colab.research.google.com/drive/1uoAKGbkJA4rnZV9Lwg1unOvvEloudcvM?usp=sharing

This notebook works as expected out of the box. I am pretty sure it is either:

  • you are not using the main transformers branch
  • your file is not well defined

Thanks, I'm trying. I see that your model is "hf-internal-testing/random-nllb-moe-2-experts". Can you try the "facebook/nllb-moe-54b" model?

ArthurZucker commented 1 year ago

Just did, it works the same

huaxueqi commented 1 year ago

OK, thanks, I'm trying.

4drawing95 commented 1 year ago

I have the same problem. I downloaded it separately and tried to make it work directly, but it still didn't work. Any idea when this will be fixed?

huaxueqi commented 1 year ago

I have the same problem. I downloaded it separately and tried to make it work directly, but it still didn't work. Any idea when this will be fixed?

me too

ArthurZucker commented 1 year ago

Are you sure that you are on the latest release of transformers? pip install --upgrade transformers
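
A quick way to confirm that the installed version actually registers the model type (a sketch; AutoConfig.for_model instantiates the default config for a given model_type and raises KeyError if it is unknown):

>>> import transformers
>>> transformers.__version__
>>> from transformers import AutoConfig
>>> AutoConfig.for_model("nllb-moe")   # KeyError here means the install predates NLLB-MoE support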

4drawing95 commented 1 year ago

Are you sure that you are on the latest release of transformers? pip install --upgrade transformers

Wow, I had forgotten about this, but after trying it, it now runs fine. Thank you very much.