huggingface / optimum-intel

🤗 Optimum Intel: Accelerate inference with Intel optimization tools
https://huggingface.co/docs/optimum/main/en/intel/index
Apache License 2.0

Restore SDPA in Gemma2 models for transformers > 4.45 #976

Closed eaidova closed 3 weeks ago

eaidova commented 4 weeks ago

What does this PR do?

transformers 4.45 introduces a performance regression for Gemma2 models by switching the default attention implementation from sdpa to eager in this commit: https://github.com/huggingface/transformers/commit/975b988bfe6e7ebb47390cd9a1556c6888804883
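For illustration, the same default can also be overridden when loading the model directly with transformers (a minimal sketch, not the change made in this PR; the model id is only an example):

```python
# Minimal sketch (not the actual change in this PR): explicitly requesting the
# SDPA attention implementation when loading a Gemma2 model with
# transformers >= 4.45, where the default was switched to "eager".
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b",          # example model id
    attn_implementation="sdpa",   # override the new "eager" default
)
```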

Benchmarking results on CPU:

| model | precision | input prompt length | 2nd token latency (ms) without SDPA | 2nd token latency (ms) with SDPA |
|---|---|---|---|---|
| gemma-2-2b | FP16 | 32 | 37.9 | 26.7 |
| gemma-2-2b | FP16 | 1024 | 106.7 | 27.3 |
| gemma-2-9b | FP16 | 32 | 112.4 | 82.3 |
| gemma-2-9b | FP16 | 1024 | 310.2 | 83.7 |
| gemma-2-2b | INT8 | 32 | 31.9 | 20.9 |
| gemma-2-2b | INT8 | 1024 | 111.3 | 21.3 |
| gemma-2-9b | INT8 | 32 | 82.9 | 53.4 |
| gemma-2-9b | INT8 | 1024 | 301.9 | 55.5 |
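A rough way to estimate numbers of this kind (not the exact harness used for the table above; the model id, prompt construction, and generation settings are assumptions) is to time generation with one and with two new tokens and take the difference:

```python
# Rough sketch for estimating 2nd-token latency with an OpenVINO-exported model.
# Not the exact benchmark harness used above; model id and prompt are placeholders.
import time

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

# Build a prompt of roughly the desired length (tokenized length will vary).
inputs = tokenizer("laugh " * 1024, return_tensors="pt",
                   truncation=True, max_length=1024)

def generation_time(new_tokens):
    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=new_tokens,
                   min_new_tokens=new_tokens, do_sample=False)
    return time.perf_counter() - start

t1 = generation_time(1)  # prefill + 1st token
t2 = generation_time(2)  # prefill + 1st + 2nd token
print(f"approx. 2nd token latency: {(t2 - t1) * 1000:.1f} ms")
```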

Before submitting

HuggingFaceDocBuilderDev commented 4 weeks ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

IlyasMoutawwakil commented 3 weeks ago

What about the issue https://github.com/huggingface/transformers/issues/32848, which is the reason behind the default implementation change? Pretty sure it's a modeling incompatibility, but can you make sure the snippet in the issue returns the correct outputs with sdpa + ov_model?

eaidova commented 3 weeks ago

@IlyasMoutawwakil, I tested gemma-2-2b-it, and the output seems correct:

['Let\'s count them! \n\nYou\'ve used the word "laugh" one times in your question. 😊 \n', 'There are two "laugh" words in your sentence. 😊 \n', 'There are three "laugh" words in the phrase "laugh laugh laugh". \n', 'There are four "laugh" words in the phrase "laugh laugh laugh laugh". \n', 'There are 5 "laugh" words in the phrase "laugh laugh laugh laugh laugh". \n']
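For reference, a check along these lines (only an approximation of the snippet from the linked issue; the model id, prompts, and generation settings are assumptions) can be run against the exported OpenVINO model:

```python
# Approximate version of the kind of check discussed above (the exact snippet
# is in the linked transformers issue); prompts and settings are guesses.
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

for n in range(1, 6):
    prompt = "How many times did I write laugh: " + "laugh " * n
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```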

P.S. As I understand it, the issue appears for bfloat16 execution on CUDA, which is not our case (even when loading bf16 model weights in OpenVINO we try to preserve fp32 accuracy and fall back nodes to this precision if there is overflow). The difference from the original answer comes from the cache format used, but it does not change the meaning, and I get the same output if I modify the cache class for transformers too.

IlyasMoutawwakil commented 3 weeks ago

thanks for investigating 👍