intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

[All-in-one benchmark] [GPT2-large] The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3 #11951

Open · Kpeacef opened this issue 3 months ago

Kpeacef commented 3 months ago

Hi, I am trying to benchmark GPT2-large and ran into RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3.

The model should be able to accept inputs of up to 1024 consecutive tokens. I have tried different in/out token counts, and the largest in/out pair that works is 512/512.

Input sizes from 640 to 2048 all fail with the same RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3.
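For context, a quick sketch (an editorial illustration, not from the original report) of why 512/512 may be the largest working pair: GPT2-large's context window is fixed at `config.n_positions = 1024`, so, assuming the benchmark's input and output token counts share that window, any in/out pair whose sum exceeds 1024 cannot fit.

```python
# Sketch: check candidate in/out pairs against GPT2-large's fixed context
# window (config.n_positions == 1024). Assumes the benchmark's input and
# generated token counts are additive within that window.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("openai-community/gpt2-large")
print(config.n_positions)  # 1024

for in_tok, out_tok in [(512, 512), (640, 640), (1024, 1024), (2048, 2048)]:
    fits = in_tok + out_tok <= config.n_positions
    print(f"{in_tok}/{out_tok}: {'fits' if fits else 'exceeds 1024-token window'}")
```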

APIs used: transformer_int4_fp16_gpu & optimize_model_gpu

Model used: openai-community/gpt2-large

Versions: bigdl-core-xe-21 2.6.0b20240827

Thank you.

cranechu0131 commented 2 months ago

Hi Kpeacef, we have looked into this issue. We tried running GPT2-large with native transformers alone (without ipex-llm), and the same error is raised at the input sizes you mentioned. So we believe this issue is not introduced by ipex-llm; it is likely due to GPT2's incompatibility with the current version of transformers.
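For reference, a minimal repro sketch along these lines (an editorial illustration, assuming the failure comes from generating past GPT2's 1024-token position limit; the exact behavior depends on the transformers version):

```python
# Sketch: reproduce the size mismatch with plain transformers (no ipex-llm)
# by asking GPT2-large to generate past its 1024-token context window.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai-community/gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A 640-token dummy prompt plus 640 new tokens totals 1280 > n_positions.
input_ids = torch.full((1, 640), tokenizer.eos_token_id, dtype=torch.long)

# min_new_tokens forces generation past the window instead of stopping early.
# Expected on affected transformers versions:
# RuntimeError: The size of tensor a (1024) must match the size of
# tensor b (1025) at non-singleton dimension 3
output = model.generate(
    input_ids, max_new_tokens=640, min_new_tokens=640, do_sample=False
)
```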