huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Failed to import transformers.models.transfo_xl.configuration_transfo_xl #28446

Closed andysingal closed 8 months ago

andysingal commented 9 months ago

System Info

Colab Notebook

Who can help?

@ArthurZucker @pacman100

Information

Tasks

Reproduction

# TEACHER_MODEL, unique_labels, id2label and label2id are defined earlier in the notebook.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    TEACHER_MODEL,
    problem_type="multi_label_classification",
    num_labels=len(unique_labels),
    id2label=id2label,
    label2id=label2id,
)

ERROR:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1352         self._objects = {} if extra_objects is None else extra_objects
-> 1353         self._name = name
   1354         self._import_structure = import_structure

11 frames
/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
    125             level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127 

/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)

ModuleNotFoundError: No module named 'transformers.models.transfo_xl.configuration_transfo_xl'

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
<ipython-input-24-49d540f006ea> in <cell line: 1>()
----> 1 model = AutoModel.from_pretrained(
      2     TEACHER_MODEL,
      3     problem_type="multi_label_classification",
      4     num_labels=len(unique_labels),
      5     id2label=id2label,

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    541 
    542         has_remote_code = hasattr(config, "auto_map") and cls.__name__ in config.auto_map
--> 543         has_local_code = type(config) in cls._model_mapping.keys()
    544         trust_remote_code = resolve_trust_remote_code(
    545             trust_remote_code, pretrained_model_name_or_path, has_local_code, has_remote_code

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in keys(self)
    755 
    756     def keys(self):
--> 757         mapping_keys = [
    758             self._load_attr_from_module(key, name)
    759             for key, name in self._config_mapping.items()

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in <listcomp>(.0)
    756     def keys(self):
    757         mapping_keys = [
--> 758             self._load_attr_from_module(key, name)
    759             for key, name in self._config_mapping.items()
    760             if key in self._model_mapping.keys()

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in _load_attr_from_module(self, model_type, attr)
    752         if module_name not in self._modules:
    753             self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 754         return getattribute_from_module(self._modules[module_name], attr)
    755 
    756     def keys(self):

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in getattribute_from_module(module, attr)
    696     if isinstance(attr, tuple):
    697         return tuple(getattribute_from_module(module, a) for a in attr)
--> 698     if hasattr(module, attr):
    699         return getattr(module, attr)
    700     # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the

/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in __getattr__(self, name)
   1341         super().__init__(name)
   1342         self._modules = set(import_structure.keys())
-> 1343         self._class_to_module = {}
   1344         for key, values in import_structure.items():
   1345             for value in values:

/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1353         self._name = name
   1354         self._import_structure = import_structure
-> 1355 
   1356     # Needed for autocompletion in an IDE
   1357     def __dir__(self):

RuntimeError: Failed to import transformers.models.transfo_xl.configuration_transfo_xl because of the following error (look up to see its traceback):
No module named 'transformers.models.transfo_xl.configuration_transfo_xl'

Expected behavior

The snippet should run smoothly and load the model.

ArthurZucker commented 9 months ago

TransfoXL was deprecated and now lives in the legacy folder (see src/transformers/models/deprecated/transfo_xl), as it is no longer maintained.
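
A minimal sketch of what that relocation looks like, assuming transformers >= 4.36 (the import path below is inferred from the folder layout mentioned above, not quoted from the thread):

# Assumption: transformers >= 4.36, where the module lives under the deprecated package.
from transformers.models.deprecated.transfo_xl.configuration_transfo_xl import TransfoXLConfig

config = TransfoXLConfig()   # default TransfoXL configuration
print(config.model_type)     # "transfo-xl"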

andysingal commented 9 months ago

> TransfoXL was deprecated and now lives in the legacy folder (see src/transformers/models/deprecated/transfo_xl), as it is no longer maintained.

Any workaround for the above code to run?

RichardAragon commented 9 months ago

Just here because I am having this same issue right now :(

andysingal commented 9 months ago

Same here, I just did from transformers import Train

ArthurZucker commented 9 months ago

from transformers import TransfoXLForSequenceClassification should help you. cc @ydshieh, this is a regression and should throw a deprecation warning, not an error! Can you have a look, as you did the deprecation cycle?
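
A minimal sketch of that workaround, reusing the transfo-xl-wt103 checkpoint mentioned later in this thread (the label count below is an illustrative placeholder):

from transformers import TransfoXLForSequenceClassification

model = TransfoXLForSequenceClassification.from_pretrained(
    "transfo-xl-wt103",
    problem_type="multi_label_classification",
    num_labels=2,  # placeholder; use len(unique_labels) as in the original snippet
)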

ydshieh commented 9 months ago

OK, taking a look into this.

ydshieh commented 9 months ago

@andysingal

I am running

from transformers import AutoModelForSequenceClassification

ckpt = "transfo-xl-wt103"
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

and it works.

Could you share the Colab notebook that reproduces the issue?

sanalsprasad commented 9 months ago

I got the same error after I upgraded the transformers package. If you are downloading the files from a Hugging Face repo, can you try removing the local model cache files and redownloading them? That worked for me.
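
A minimal sketch of that cleanup, assuming the default cache location (adjust the path if HF_HOME or TRANSFORMERS_CACHE is set):

import os
import shutil

# Default Hugging Face cache; models are re-downloaded on the next from_pretrained call.
cache_dir = os.path.expanduser("~/.cache/huggingface/hub")
shutil.rmtree(cache_dir, ignore_errors=True)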

nandakishorebellammuralidhar commented 9 months ago

following this issue...

ydshieh commented 9 months ago

@andysingal @sanalsprasad @nandakishorebellammuralidhar

As mentioned, I tried on Colab and I am not able to reproduce the error.

Could you provide your system information by running the transformers-cli env command, as well as a code snippet? Or you can try to reproduce it on Colab.

Otherwise, I'm afraid that I won't be able to help on this.
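
If transformers-cli is not available, a rough manual equivalent for gathering the same information might look like this (an illustrative sketch, not the CLI's exact output):

import platform

import torch
import transformers

print(f"Python version: {platform.python_version()}")
print(f"transformers version: {transformers.__version__}")
print(f"torch version: {torch.__version__}")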

RichardAragon commented 9 months ago

I tried the same process again yesterday, or the day before, that threw this error for me before. I did get the configuration error again; I had to uninstall transformers and reinstall the Hugging Face transformers package, and that fixed it this time.

The use case was that I was trying to fine-tune an already quantized model. It was a model I had already fine-tuned, and I wanted to fine-tune it again. If memory serves, that is the issue that brought me here too. I think then I was attempting to merge two already-merged models via mergekit.

kasiwoos commented 9 months ago

transformers.models.transfo_xl.configuration_transfo_xl is deprecated as of transformers v4.36, so install version 4.35:

!pip install -q -U git+https://github.com/huggingface/transformers.git@v4.35-release

and restart the Colab kernel.
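
A quick sanity check after the kernel restart (illustrative, not part of the original comment):

import transformers

# Should print a 4.35.x version after pinning and restarting.
print(transformers.__version__)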

ydshieh commented 9 months ago

@kasiwoos it is deprecated, but it will continue to work. We just don't run any tests against this model anymore, and it won't be maintained.

But I can't reproduce the issue people reported here.

patruff commented 9 months ago

Same issue for me; @kasiwoos's fix worked. To reiterate, the issue for me was that a fine-tuned, 8-bit quantized Llama 2 model from 2 weeks ago won't load with the latest transformers release.

ydshieh commented 9 months ago

@patruff Could you give more details on how to reproduce, please? That would be really helpful.

patruff commented 9 months ago

> @patruff Could you give more details on how to reproduce, please? That would be really helpful.

Sure, run this on a T4 in Colab with the latest transformers:

# name can be any 8-bit model
name = 'patruff/chucklesEFT1'

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_8bit = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(name)

ydshieh commented 9 months ago

@patruff First, thanks for sharing. I am still not able to reproduce, however.

name='patruff/chucklesEFT1' is a dataset, so I changed it to name='patruff/toxic-llama2-7b-tuneEFT1'.

On Colab, it works (even if I upgrade transformers to v4.37).

github-actions[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

ilyankou commented 7 months ago

Got the same issue when loading Mistral-7B-Instruct-v0.2:

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

Went through the following steps (Mac) and got it fixed:

  1. Updated the transformers library: pip install transformers -U
  2. Removed everything in cache: rm -rf ~/.cache/huggingface
  3. Ran transformers-cli env and got the following message:

    The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
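
If that one-time migration is interrupted, it can be resumed manually, as the message suggests (a minimal sketch):

import transformers

# Resumes the interrupted cache migration.
transformers.utils.move_cache()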