huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

AttributeError: 'str' object has no attribute 'shape' #31678

Open · MARD1NO opened this issue 1 week ago

MARD1NO commented 1 week ago

System Info

transformers version 4.42.1

Who can help?

No response

Information

Tasks

Reproduction

Run chatglm3 generation

Expected behavior

The following error is raised:

    def get_masks(self, input_ids, past_key_values, padding_mask=None):
        batch_size, seq_length = input_ids.shape
        full_attention_mask = torch.ones(batch_size, seq_length, seq_length, device=input_ids.device)
        full_attention_mask.tril_()
        past_length = 0
        if past_key_values:
>           past_length = past_key_values[0][0].shape[0]
E           AttributeError: 'str' object has no attribute 'shape'

../../../.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py:683: AttributeError
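The traceback shows that `past_key_values[0][0]` is a string rather than a tensor when the custom `get_masks` indexes into it. One way the remote `modeling_chatglm.py` code could guard against cache-format differences across transformers versions is sketched below. This is a hypothetical, pure-Python illustration: the helper name `get_past_length` and the duck-typed checks are mine, though `get_seq_length()` is the accessor that newer transformers `Cache` objects expose.

```python
# Hypothetical defensive rewrite of the length lookup in get_masks.
# Newer transformers releases may hand the model a Cache object instead of
# the legacy tuple of (key, value) tensor pairs that modeling_chatglm.py
# expects, so we check the shape of what we were given before indexing.

def get_past_length(past_key_values):
    """Return the cached sequence length for either cache format."""
    if not past_key_values:
        return 0
    # Newer transformers Cache objects expose get_seq_length().
    if hasattr(past_key_values, "get_seq_length"):
        return past_key_values.get_seq_length()
    # Legacy format: a tuple of (key, value) pairs; chatglm stores its
    # key/value tensors sequence-first, so shape[0] is the past length.
    first_key = past_key_values[0][0]
    if hasattr(first_key, "shape"):
        return first_key.shape[0]
    raise TypeError(f"Unsupported past_key_values entry: {type(first_key)!r}")
```

With a helper like this, the failing line `past_length = past_key_values[0][0].shape[0]` becomes `past_length = get_past_length(past_key_values)`, and a stray non-tensor entry fails with a descriptive `TypeError` instead of the opaque `AttributeError` above.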
MARD1NO commented 1 week ago

I think it may be related to https://github.com/huggingface/transformers/pull/31679

amyeroberts commented 1 week ago

Hi @MARD1NO, thanks for opening a PR!

So that we can best help you, could you:

  • Share the full running env: run transformers-cli env in the terminal and copy-paste the output
  • Share a minimal code snippet to reproduce the error

It does look like the error is similar to the one in #31679. As the code in the description looks like it's custom, rather than from the transformers library, that code might need to be updated to handle this

cc @gante @zucchini-nlp

MARD1NO commented 1 week ago

> Hi @MARD1NO, thanks for opening a PR!
>
> So that we can best help you, could you:
>
> • Share the full running env: run transformers-cli env in the terminal and copy-paste the output
> • Share a minimal code snippet to reproduce the error
>
> It does look like the error is similar to the one in #31679. As the code in the description looks like it's custom, rather than from the transformers library, that code might need to be updated to handle this
>
> cc @gante @zucchini-nlp

Hi @amyeroberts, thanks for your quick reply :D

env is:

- `transformers` version: 4.42.1
- Platform: Linux-5.4.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config:    not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090

The minimal code snippet is when using chatglm3 to generate like that:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", padding_side="left", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("THUDM/chatglm3-6b", device_map="auto", trust_remote_code=True)
model = model.eval()

prompts = ["hello, how are you?", "Who are you?"]

inputs = tokenizer(prompts, padding=True, return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs, 
                      max_new_tokens=128,
                      do_sample=False,
                      repetition_penalty=1.0)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

This snippet runs successfully with transformers==4.40.1, so I think a bug was introduced in the newer version.
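Until the remote modeling code on the Hub handles the newer cache format, a temporary workaround consistent with the observation above is to pin the last version the reporter confirmed working:

```shell
pip install "transformers==4.40.1"
```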

amyeroberts commented 6 days ago

Hi @MARD1NO, thanks for sharing!

As the modeling code is defined in https://huggingface.co/THUDM/chatglm3-6b/blob/main/modeling_chatglm.py, I'd suggest opening a discussion on the THUDM/chatglm3-6b repo to report this error.