djannot opened this issue 2 months ago
This is the same issue as mine.
Oh it's best to update transformers via pip install --upgrade "transformers>=4.45"
Thanks @danielhanchen for the fast response (as usual).
I did try this, but I now get another error:
Traceback (most recent call last):
File "/home/denis/Documents/ai/unsloth/llama3-chat-template.py", line 113, in <module>
trainer_stats = trainer.train()
File "<string>", line 145, in train
File "<string>", line 358, in _fast_inner_training_loop
File "/home/denis/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py", line 3477, in training_step
self.optimizer.train()
File "/home/denis/miniconda3/envs/pytorch/lib/python3.10/site-packages/accelerate/optimizer.py", line 128, in train
return self.optimizer.train()
AttributeError: 'AdamW' object has no attribute 'train'
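The traceback above can be reduced to a small illustrative sketch (these are stand-in classes, not the real libraries): newer transformers calls `optimizer.train()` during `training_step`, accelerate's wrapper forwards the call, and an older optimizer object without a `.train()` method raises exactly this AttributeError.

```python
class AdamW:
    """Stand-in for an optimizer class that predates .train()/.eval()."""

class AcceleratedOptimizer:
    """Simplified stand-in for accelerate's optimizer wrapper."""
    def __init__(self, optimizer):
        self.optimizer = optimizer

    def train(self):
        # Forwards to the wrapped optimizer, which may not define .train()
        return self.optimizer.train()

try:
    AcceleratedOptimizer(AdamW()).train()
except AttributeError as exc:
    print(exc)  # 'AdamW' object has no attribute 'train'
```

This is why upgrading the packages together matters: the caller and the wrapped object have to agree on the optimizer interface.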
Ok, that's a weird error - are you using the notebooks we provided without any changes? It's possible Hugging Face's new update broke some parts.
Yes, but I've just tried creating a new conda env and in that case it works.
So there was probably something weird going on with the upgrades of the different packages, though I still don't understand why it was working with the 1B model.
Anyway, you can close the issue. And thanks again for the replies.
This worked for me too 😸
But when running inference, I get: ValueError: Invalid cache_implementation (dynamic). Choose one of: ['static', 'offloaded_static', 'sliding_window', 'hybrid', 'mamba', 'quantized', 'static']
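A simplified sketch of the validation behind that ValueError (the allowed set is taken verbatim from the error message above; the function is a stand-in, not the transformers API): as I understand it, the dynamic cache is the default and is selected by passing no `cache_implementation` at all, rather than the string "dynamic".

```python
# Allowed values copied from the error message in this thread.
VALID_CACHE_IMPLEMENTATIONS = {
    "static", "offloaded_static", "sliding_window", "hybrid", "mamba", "quantized",
}

def check_cache_implementation(name):
    """Mimic the validation that rejects unknown cache_implementation values."""
    if name is not None and name not in VALID_CACHE_IMPLEMENTATIONS:
        raise ValueError(
            f"Invalid cache_implementation ({name}). "
            f"Choose one of: {sorted(VALID_CACHE_IMPLEMENTATIONS)}"
        )

check_cache_implementation(None)      # default (dynamic) cache: accepted
check_cache_implementation("static")  # accepted
```

So if code somewhere passes `cache_implementation="dynamic"` explicitly, dropping that argument sidesteps the error.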
This error was fixed by upgrading unsloth to version 2024.9.post3 and transformers to version 4.45.0.
Upgrading accelerate to version 0.34.0 will resolve this issue.
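A quick way to sanity-check an environment against the minimums suggested in this thread (transformers >= 4.45.0, accelerate >= 0.34.0). This helper is an illustrative assumption, not part of unsloth; the version comparison is simplified and ignores pre/post-release tags.

```python
from importlib.metadata import PackageNotFoundError, version

def numeric_parts(v):
    """Extract numeric components of a dotted version string.
    Simplified: non-numeric tag text (e.g. 'post3') keeps only its digits."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return parts

def meets_minimum(pkg, minimum):
    """True if pkg is installed and its version is >= minimum."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False
    return numeric_parts(installed) >= numeric_parts(minimum)

# Minimums drawn from the fixes reported in this thread.
for pkg, minimum in [("transformers", "4.45.0"), ("accelerate", "0.34.0")]:
    print(pkg, "ok" if meets_minimum(pkg, minimum) else f"needs >= {minimum}")
```

Running this before training makes version-mismatch errors like the AttributeError above much easier to diagnose.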
I'm running into this same error when trying to quantize the trained models into gguf format
Exception: data did not match any variant of untagged enum ModelWrapper at line 1251003 column 3
Edit: The tokenizer unsloth exports is broken.
I am running into this same error as well when merging and exporting the 16-bit model and using it on vLLM. I have tried multiple models and the error is consistent; the tokenizer exporter is most definitely broken. Edit: by using the latest version of the Docker image from vLLM (v0.6.2), it now works.
I encountered the same issue as @selectorseb while deploying a finetuned Llama-3.2 model using vLLM with Docker. Initially, I faced the same problem mentioned in @djannot's original post, but after updating the vLLM Docker image, the issue was resolved.
@KaiDF Apologies, I forgot to mention that you should update Unsloth! Glad it works now! Sorry about the issue!
@mf-skjung I'll actually edit pyproject.toml to log this - thanks!
On the rest of the issues - the solution seems to be updating to vllm>=0.6.2, i.e. pip install --upgrade "vllm>=0.6.2"
I am running a notebook on Google Colab and still have this issue. I am trying to load a checkpoint from a Llama model fine-tuned with LoRA. Yesterday it worked fine, but today that changed. If I update to transformers 4.45, I receive another error (invalid repository id).
@riddle-today Apologies - can you screenshot the error? The picture you provided is just a warning - you can ignore that!
It is the same error as @djannot's. The picture before was to show the versions of transformers, unsloth, and xformers I am using. Thank you so much for the prompt answer, @danielhanchen.
If I go and download the tokenizer files from the HuggingFace repository and replace them, it works.
Updating tokenizers to the latest 0.20.0 might help
@teamclouday Oh wait try not to update it to 0.20!! Transformers will error out!!
@riddle-today Oh yep apologies I forgot to mention you have to override the tokenizer with the latest one I uploaded!
If I go and download the tokenizer files from the HuggingFace repository and replace them, it works.
This resolves Exception: data did not match any variant of untagged enum ModelWrapper ...
for me, too! It seems like some saving error?
@tongyx361 Apologies on the delay - yes, the new transformers update broke saving, so you need to overwrite the old tokenizer files by re-downloading them.
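The "overwrite the tokenizer" fix can be sketched as follows. This is an illustrative assumption, not unsloth API: it presumes you have already re-downloaded known-good tokenizer files from the base model's Hugging Face repo into a local folder, and it simply copies them over the files produced by the broken save. The file-name list is a common convention, not exhaustive.

```python
import shutil
from pathlib import Path

# Tokenizer files commonly produced by a Hugging Face-style save
# (illustrative list; your export may contain more or fewer).
TOKENIZER_FILES = ["tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"]

def overwrite_tokenizer(good_dir, broken_dir):
    """Replace tokenizer files in broken_dir with copies from good_dir.
    Returns the list of file names that were actually replaced."""
    replaced = []
    for name in TOKENIZER_FILES:
        src = Path(good_dir) / name
        if src.exists():
            shutil.copy2(src, Path(broken_dir) / name)
            replaced.append(name)
    return replaced
```

Usage would look like `overwrite_tokenizer("downloaded-base-tokenizer/", "my-exported-model/")`, after which the exported model directory loads with the known-good tokenizer.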
Can somebody list down the steps to override the tokenizer file. I am new to this. Need Help!
From my understanding, it is as described above (re-download the tokenizer files and replace the exported ones). But I'm still stuck on another issue after that, so I can't confirm.
I am still facing this issue. I have the latest version, 2024.10.7, but unsloth requires transformers < 4.45, and when I use transformers < 4.45 I get the same error.
@katopz @srsugandh Can you guys ask this on our Discord - probably a better place to get this resolved
@katopz @danielhanchen @srsugandh - same problem here. Unsloth requires transformers < 4.45, but that doesn't work. So should we manually install a higher version of transformers to fix this issue?
@danielhanchen @katopz - here is a notebook for "offline" installation on Kaggle: (https://www.kaggle.com/code/kolyan1/offline-unsloth-package-installation-pt-2-working)
pip3 install unsloth==2024.10.4 torch==2.4.1
pip3 install transformers==4.45.2
I found a workaround. I did a pip install to get the latest version of unsloth, then uninstalled it, and then installed unsloth from the GitHub commit (pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"). This is because installing from the commit alone does not install the related libraries. Then I installed transformers version 4.45.1 and it worked.
I get this error:
It works with unsloth/Llama-3.2-1B-Instruct-bnb-4bit