Closed — NanoCode012 closed this issue 1 year ago
That's completely right! The `config.model_type` should be `nllb-moe` instead of `nllb_moe`. Will modify this in the checkpoints and in the code. Thanks for reporting!
@ArthurZucker , hello!
I noticed that and have also attempted that, but I got the same error weirdly. I will try it again later.
It is the `config.json`, right?
Yes, the `config.json` was wrong!
Hello @ArthurZucker , sorry for bothering you again.
I have run `git pull` on the latest Hugging Face repo and still get the same error.
```
>>> tokenizer = AutoTokenizer.from_pretrained("../hub/nllb-moe-54b", use_auth_token=True, src_lang="eng_Latn")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("../hub/nllb-moe-54b")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 920, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 626, in __getitem__
    raise KeyError(key)
KeyError: 'nllb-moe'
```
Do I need to install from your branch https://github.com/huggingface/transformers/pull/22470?
Edit: Oh, it was just merged 1 min ago.
This is normal! You need to update your config.json file
If you were using a hub model, it would automatically update. The PR fixes the default value but for models that were already downloaded you need to update the config
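Updating an already-downloaded snapshot by hand can be sketched as follows. This is a minimal helper, not an official tool; the snapshot path in the example is an assumption and must be adjusted to wherever the model was downloaded.

```python
import json
from pathlib import Path

def fix_model_type(config_path, new_type="nllb-moe"):
    """Rewrite the model_type field of a local config.json in place.

    Returns (old_value, new_value) so the caller can see what changed.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    old = config.get("model_type")
    config["model_type"] = new_type  # older snapshots shipped "nllb_moe"
    path.write_text(json.dumps(config, indent=2))
    return old, new_type

# Example usage (the path is hypothetical; point it at your local snapshot):
# fix_model_type("../hub/nllb-moe-54b/config.json")
```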
Yes, I tried both: 1) updating `config.json`, and 2) running `git pull` in the downloaded HF model repo https://huggingface.co/facebook/nllb-moe-54b/commit/83c96e4658a2e02c182d0ab794229301862791ee (not the transformers repo).
I'm not sure if it cached the `config.json` somewhere?
Edit: Will pip install the latest transformers from source.
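To rule out a stale cached copy, it can help to check where `huggingface_hub` keeps downloads. A minimal sketch, assuming the default cache layout (the `HF_HOME` environment variable overrides `~/.cache/huggingface`; older transformers versions also used a separate `~/.cache/huggingface/transformers` directory):

```python
import os
from pathlib import Path

def hf_cache_dir(env=None):
    """Default hub cache directory, honoring the HF_HOME override."""
    env = os.environ if env is None else env
    if "HF_HOME" in env:
        home = Path(env["HF_HOME"])
    else:
        home = Path.home() / ".cache" / "huggingface"
    return home / "hub"

# Inspect (or delete) stale snapshots under this directory:
print(hf_cache_dir())
```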
Hm, I have pip installed from source and also confirmed that `config.json` got updated.
```
Unpacking objects: 100% (3/3), 342 bytes | 0 bytes/s, done.
From https://huggingface.co/facebook/nllb-moe-54b
   59fc265..83c96e4  main       -> origin/main
Updating 59fc265..83c96e4
Fast-forward
 config.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
```
I just saw. The key error is now `nllb-moe`. It is not the same error as in the first post, which was `nllb_moe`.
Okay, let me have another look!
Sorry for disturbing. Thank you very much!
So, running `model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")` definitely worked for me.
```
In [3]: model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")
Downloading (…)lve/main/config.json: 100%|██████████| 1.40k/1.40k [00:00<00:00, 272kB/s]
Downloading (…)model.bin.index.json: 100%|██████████| 91.5k/91.5k [00:00<00:00, 992kB/s]
Downloading (…)00001-of-00002.bin";: 100%|██████████| 7.75G/7.75G [02:04<00:00, 62.0MB/s]
Downloading (…)00002-of-00002.bin";: 100%|██████████| 9.36G/9.36G [02:17<00:00, 68.0MB/s]
Downloading shards: 100%|██████████| 2/2 [04:23<00:00, 131.96s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:11<00:00, 5.82s/it]

In [4]:
```
The issue is most probably related to the config / the cache! But I will still look into it. In the meantime, use the model directly.
Hello @ArthurZucker, thank you for the info!
Is the problem solved?
Hey! I have not tried this yet. I think it could've been fixed; I probably had some caching issue with packages.
I have not recently been able to get a machine to run this.
I have the same problem. I don't think changing the config file to "nllb-moe" is the solution; I tried many times, nothing is cached, and it is the first time I have used it.
Hey! Really sorry but I can't reproduce this now: https://colab.research.google.com/drive/1uoAKGbkJA4rnZV9Lwg1unOvvEloudcvM?usp=sharing
This notebook works as expected out of the box. I am pretty sure it is either:
- you are not using the `main` transformers branch
- your file is not well defined
Thanks, I'm trying. I see that your model is "hf-internal-testing/random-nllb-moe-2-experts". Can you try the "facebook/nllb-moe-54b" model?
Just did, it works the same
OK! Thanks, I'm trying.
I have the same problem. I downloaded it separately and tried to make it work directly, but it still didn't work. Any idea when this will be fixed?
me too
Are you sure that you are on the latest release of transformers?
`pip install --upgrade transformers`
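A quick way to confirm the upgrade actually took effect in the active interpreter (a common pitfall when several environments are installed) is to query the package metadata. A minimal sketch using only the standard library:

```python
import importlib.metadata as md

def installed_version(pkg):
    """Version string of pkg in the current environment, or None if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

# nllb-moe support requires a release that includes PR #22470:
print(installed_version("transformers"))
```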
Wow, I had forgotten about this, but after trying it, I ran it and it works fine, thank you very much.
System Info

`transformers` version: 4.28.0.dev0

Who can help?

@ArthurZucker from https://github.com/huggingface/transformers/pull/22024

Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction

Following the example script on https://huggingface.co/facebook/nllb-moe-54b (but pointing to a local git copy):

```
pip install git+https://github.com/huggingface/transformers.git
python
```

Note: The system might not have enough RAM, but this errored immediately after launch and does not seem like OOM.

Expected behavior

It can load the model.