Closed: andysingal closed this issue 8 months ago.
TransfoXL was deprecated and is now in the legacy folder (see /transformers/src/transformers/models/deprecated/transfo_xl), as it is no longer maintained.
Is there any workaround to get the above code to run?
Just here because I am having this same issue right now :(
Same here, I just did from transformers import Train
from transformers import TransfoXLForSequenceClassification
should help you.
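If it helps, here is that workaround as a minimal, complete sketch. It assumes a transformers version that still exports these top-level names (they were kept through the deprecation cycle) and borrows the transfo-xl-wt103 checkpoint from the snippet later in this thread; TransfoXLTokenizer is my addition for completeness:

# Direct imports; the deprecated model classes remain importable from the top level.
from transformers import TransfoXLForSequenceClassification, TransfoXLTokenizer

ckpt = "transfo-xl-wt103"
tokenizer = TransfoXLTokenizer.from_pretrained(ckpt)  # tokenizer for the same checkpoint
model = TransfoXLForSequenceClassification.from_pretrained(ckpt)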
cc @ydshieh this is a regression and should throw a deprecation warning, not an error! Can you have a look, since you did the deprecation cycle?
OK, taking a look into this.
@andysingal
I am running
ckpt = "transfo-xl-wt103"
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
and it works.
Could you share the colab that can produce the issue?
I got the same error after I upgraded the transformers package. If you are downloading the files from a Hugging Face repo, try removing the local model cache files and redownloading them. That worked for me.
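For anyone unsure what removing the local model cache files looks like concretely, here is a rough sketch under two assumptions: the cache lives at the default ~/.cache/huggingface/hub, and the models--transfo-xl-wt103 folder name is only an illustration of the hub's naming scheme, so substitute the repo you actually want to purge:

import shutil
from pathlib import Path

# Default Hub cache location; adjust if you set HF_HOME or TRANSFORMERS_CACHE.
cache_dir = Path.home() / ".cache" / "huggingface" / "hub"

# Cached repos are stored as "models--<org>--<name>"; this folder name is a hypothetical example.
repo_dir = cache_dir / "models--transfo-xl-wt103"

if repo_dir.exists():
    shutil.rmtree(repo_dir)  # delete the cached files so from_pretrained redownloads them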
following this issue...
@andysingal @sanalsprasad @nandakishorebellammuralidhar
As mentioned, I tried on colab and I am not able to reproduce the error.
Could you provide your system information by running the transformers-cli env command, as well as a code snippet.
Or you can try to reproduce it on colab.
Otherwise, I'm afraid that I won't be able to help on this.
I tried the same process again yesterday or the day before; it had thrown this error for me before. I did get the configuration error again, and I had to uninstall transformers and reinstall the Hugging Face transformers package, which fixed it this time.
The use case was that I was trying to fine-tune an already quantized model. It was a model I had already fine-tuned, and I wanted to fine-tune it again. If memory serves, that is the issue that brought me here too; I think then I was attempting to merge two already merged models via mergekit.
transformers.models.transfo_xl.configuration_transfo_xl is deprecated as of transformers v4.36, so install version 4.35 with !pip install -q -U git+https://github.com/huggingface/transformers.git@v4.35-release and restart the Colab kernel.
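If you pin the version this way, a quick sanity check after restarting the kernel (plain Python, nothing version-specific assumed):

import transformers

print(transformers.__version__)  # should report 4.35.x if the pin took effect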
@kasiwoos it is deprecated, but it will continue to work. We just don't run any tests against this model anymore, and it won't be maintained.
But I can't reproduce the issue people reported here.
Same issue for me; @kasiwoos's fix worked. To reiterate, the issue for me was that a fine-tuned, 8-bit quantized Llama 2 model from two weeks ago won't load with the latest transformers release.
@patruff Could you give more details on how to reproduce, please. That would be really helpful.
Sure, run this on a T4 in Colab with the latest transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

name = 'patruff/chucklesEFT1'
model_8bit = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(name)
@patruff First, thanks for sharing. I am still not able to reproduce, however. name='patruff/chucklesEFT1' is a dataset, so I changed it to name='patruff/toxic-llama2-7b-tuneEFT1'. On Colab, it works (even if I upgrade transformers to v4.37).
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Got the same issue when loading Mistral-7B-Instruct-v0.2:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
Went through the following steps (Mac) and got it fixed:
pip install transformers -U
rm -rf ~/.cache/huggingface
transformers-cli env
and got the following message:
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
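That message is informational rather than an error; if the migration gets interrupted, it says you can resume it yourself, presumably with something like:

import transformers

transformers.utils.move_cache()  # the one-time cache migration named in the message above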
System Info
Colab Notebook
Who can help?
@ArthurZucker @pacman100
Information

Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
ERROR:
Expected behavior
run smoothly