huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

AttributeError: 'Catcher' object has no attribute 'self_attn' #29352 #29783

Closed andinus closed 5 months ago

andinus commented 6 months ago

System Info

Related: https://github.com/huggingface/transformers/issues/29352

Who can help?

No response

Information

Tasks

Reproduction

Same as https://github.com/huggingface/transformers/issues/29352

Expected behavior

Same as https://github.com/huggingface/transformers/issues/29352 (downgrading to 4.38.2 fixes this)

[Screenshot of the traceback, taken 2024-03-21; transcribed as text in a later comment]
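
As a quick illustration of the workaround noted above, here is a minimal hypothetical pre-flight check (the version threshold comes from this report; the snippet is not part of the transformers or AutoAWQ APIs) that warns before attempting quantization on an affected version:

```python
# Hypothetical guard based on this report: transformers releases newer than
# 4.38.2 were observed to break AutoAWQ quantization with this error.
from packaging import version
import transformers

if version.parse(transformers.__version__) > version.parse("4.38.2"):
    print(
        f"transformers {transformers.__version__} may trigger the Catcher error; "
        "4.38.2 is the version reported to work here"
    )
```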

amyeroberts commented 6 months ago

Hi @andinus, thanks for raising an issue!

Could you:

* provide a minimal code snippet to reproduce the error?
* share the full traceback as text, rather than a screenshot? This makes the errors searchable and enables us to more easily debug as we can copy-paste segments.

cc @ArthurZucker as it seems like a possible regression
cc @younesbelkada as it seems possibly quantization-related

ArthurZucker commented 6 months ago

It's not really a regression. As I mentioned on the other PR, autoawq removes the self_attn modules entirely, which we don't expect. Let's open the issue in AWQ; we accommodated it last time because the release was coming, but long term they are breaking the API!
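
For readers unfamiliar with the pattern being discussed, here is a simplified sketch (not AutoAWQ's actual code; `Catcher` and `install_catcher` are illustrative names only) of how a calibration wrapper can replace the first decoder layer, which is why `model.layers[0]` no longer exposes `self_attn`:

```python
import torch.nn as nn

class Catcher(nn.Module):
    """Illustrative stand-in for AutoAWQ's input-catching wrapper."""
    def __init__(self, module):
        super().__init__()
        self.module = module
        self.captured = None

    def forward(self, hidden_states, **kwargs):
        # Record the calibration inputs, then abort the forward pass.
        self.captured = (hidden_states, kwargs)
        raise RuntimeError("inputs captured, stopping forward")

def install_catcher(llama_model):
    # Hypothetical helper: swap the first decoder layer for the wrapper.
    # After this swap, llama_model.model.layers[0] is a Catcher, so any
    # transformers code reaching into layers[0].self_attn will fail.
    llama_model.model.layers[0] = Catcher(llama_model.model.layers[0])
```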

andinus commented 6 months ago
> * provide a minimal code snippet to reproduce the error?
> * share the full traceback as text, rather than a screenshot? This makes the errors searchable and enables us to more easily debug as we can copy-paste segments.

Hello, I'm very sorry, I won't be able to provide these immediately.

OCR of the traceback:

Exception: 'Catcher' object has no attribute 'self_attn'
Traceback (most recent call last):
  File "/root/qex/framework/run.py", line 318, in child_process
    Generator(input_queue, output_queue).run()
  File "/root/qex/framework/run.py", line 284, in run
    self.quantize()
  File "/root/qex/framework/run.py", line 189, in quantize
    self.finetuningmodel_engine.quantize()
  File "/root/qex/framework/engine_vilm.py", line 129, in quantize
    model.quantize(tokenizer, quant_config=quant_config)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/awq/models/base.py", line 161, in quantize
    self.quantizer = AwqQuantizer(
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 59, in __init__
    self.modules, self.module_kwargs, self.inps = self.init_quant()
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 478, in init_quant
    self.model(samples.to(next(self.model.parameters()).device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1196, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 998, in forward
    causal_mask = self._update_causal_mask(attention_mask, inputs_embeds, cache_position)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1067, in _update_causal_mask
    if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Catcher' object has no attribute 'self_attn'
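
Note that the `hasattr` on the failing line does not protect against this: the argument expression `self.layers[0].self_attn` is evaluated before `hasattr` is ever called, so the `AttributeError` raised by `nn.Module.__getattr__` escapes. A minimal illustration (using a plain module that simply lacks `self_attn`, not the actual Catcher):

```python
import torch.nn as nn

layer = nn.Linear(4, 4)  # any nn.Module without a `self_attn` submodule

try:
    # The lookup `layer.self_attn` fails before hasattr() is even invoked.
    hasattr(layer.self_attn, "past_key_value")
except AttributeError as exc:
    print(exc)  # 'Linear' object has no attribute 'self_attn'
```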
ArthurZucker commented 6 months ago

cc @casper-hansen is this what you mentioned in your tweet about breaking change?

casper-hansen commented 6 months ago

Hi @ArthurZucker, yes this is one of the issues. I have released 0.2.4 which has pinned transformers<=4.38.2 as a temporary fix for quantization and inference. On the inference issue, I am not sure how to patch it without replacing the whole LlamaForCausalLM which is a big task.

This kind of direct module access will break most (if not all) packages that use transformers and patch/optimize certain parts of the model. I would recommend creating abstractions that avoid such direct access to modules. https://github.com/huggingface/transformers/blob/76a33a10923ccc1074917f6b6a1e719e626b7dc9/src/transformers/models/llama/modeling_llama.py#L1243

I fixed the quantization issue, but there was another issue with inference following quantization that I did not have time to resolve. Reference: https://github.com/casper-hansen/AutoAWQ/issues/407#issuecomment-2016779419
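
One possible shape for such an abstraction, sketched here as a hypothetical helper (`find_self_attn` is not a transformers API), would be to resolve the attention submodule by name instead of hard-coding `self.layers[0].self_attn`:

```python
import torch.nn as nn

def find_self_attn(layer: nn.Module):
    """Return the first submodule registered as 'self_attn', or None if the
    decoder layer has been wrapped or replaced by third-party tooling."""
    for name, module in layer.named_modules():
        if name.split(".")[-1] == "self_attn":
            return module
    return None

# Model code could then degrade gracefully instead of raising, e.g.:
#   attn = find_self_attn(self.layers[0])
#   if attn is not None and hasattr(attn, "past_key_value"):
#       ...
```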

ArthurZucker commented 6 months ago

I'll have a look. We can fix this on our side as well; it's just a bit hard for us to anticipate that some modules will be removed 😓 but sorry anyway, it should not have happened.

We can make another patch to fix both issues; given AWQ's huge user base, it makes sense!

casper-hansen commented 6 months ago

Thanks @ArthurZucker, I appreciate the collaboration here to make the best of quantized models. At present, I will not be able to provide support for quantizing newer models (e.g. Qwen2MoE) due to these breaking changes.

Do you have an idea of when a fix could be implemented?

ArthurZucker commented 6 months ago

In around 12h I'll do a fix + a patch with #29895

ANBAYM commented 5 months ago

> In around 12h I'll do a fix + a patch with #29895

Hi! I'm also hitting the same issue when using AWQ to quantize the Gemma model. Please let me know when you release a fixed version! Thanks for your help.

TechxGenus commented 5 months ago

This issue still seems to be unresolved. Inference with AWQ models is now back to normal, but errors still occur when trying to quantize the Llama or Gemma models.