lyogavin / airllm

AirLLM 70B inference with single 4GB GPU
Apache License 2.0

ValueError: LlamaForCausalLM does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. #101

Open sleeper1023 opened 10 months ago

sleeper1023 commented 10 months ago

When running the airllm code, the following error appears as soon as execution reaches the line model = AirLLMLlama2("/home/user/models/Anima-7B-100K"):

model = AirLLMLlama2("/home/user/models/Anima-7B-100K")
found index file...
found_layers:{'model.embed_tokens.': True, 'model.layers.0.': True, 'model.layers.1.': True, 'model.layers.2.': True, 'model.layers.3.': True, 'model.layers.4.': True, 'model.layers.5.': True, 'model.layers.6.': True, 'model.layers.7.': True, 'model.layers.8.': True, 'model.layers.9.': True, 'model.layers.10.': True, 'model.layers.11.': True, 'model.layers.12.': True, 'model.layers.13.': True, 'model.layers.14.': True, 'model.layers.15.': True, 'model.layers.16.': True, 'model.layers.17.': True, 'model.layers.18.': True, 'model.layers.19.': True, 'model.layers.20.': True, 'model.layers.21.': True, 'model.layers.22.': True, 'model.layers.23.': True, 'model.layers.24.': True, 'model.layers.25.': True, 'model.layers.26.': True, 'model.layers.27.': True, 'model.layers.28.': True, 'model.layers.29.': True, 'model.layers.30.': True, 'model.layers.31.': True, 'model.norm.': True, 'lm_head.': True}
saved layers already found in /home/user/models/Anima-7B-100K/splitted_model
>>>> Flash Attention installed
>>>> xentropy installed
>>>> Flash RoPE installed
new version of transformers, no need to use BetterTransformer, try setting attn impl to sdpa...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/airllm/airllm.py", line 9, in __init__
    super(AirLLMLlama2, self).__init__(*args, **kwargs)
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/airllm/airllm_base.py", line 127, in __init__
    self.init_model()
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/airllm/airllm_base.py", line 202, in init_model
    self.model = AutoModelForCausalLM.from_config(self.config, attn_implementation="sdpa", trust_remote_code=True)
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 438, in from_config
    return model_class._from_config(config, **kwargs)
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1261, in _from_config
    config = cls._autoset_attn_implementation(
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1336, in _autoset_attn_implementation
    config = cls._check_and_enable_sdpa(
  File "/home/user/anaconda3/envs/airllm/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1490, in _check_and_enable_sdpa
    raise ValueError(
ValueError: LlamaForCausalLM does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
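
For context: the traceback shows airllm's init_model (airllm_base.py, line 202) hard-coding attn_implementation="sdpa" once it detects a recent transformers version, and transformers' _check_and_enable_sdpa then raising because the class actually instantiated (here, presumably a custom LlamaForCausalLM pulled in through trust_remote_code=True for Anima-7B-100K) does not declare SDPA support. Below is a minimal workaround sketch, assuming airllm 2.8.x and that the custom class runs fine with plain eager attention; the monkeypatch is illustrative only, not an official airllm option:

# Hypothetical workaround sketch: intercept the from_config call that
# airllm makes and swap the hard-coded "sdpa" for "eager", the plain
# PyTorch attention path that every transformers model class supports.
from transformers import AutoModelForCausalLM

_orig_from_config = AutoModelForCausalLM.from_config

def _from_config_eager(config, **kwargs):
    kwargs["attn_implementation"] = "eager"  # override airllm's "sdpa"
    return _orig_from_config(config, **kwargs)

# Patching the class attribute also affects airllm's own reference,
# since both names point at the same class object.
AutoModelForCausalLM.from_config = _from_config_eager

from airllm import AirLLMLlama2
model = AirLLMLlama2("/home/user/models/Anima-7B-100K")

Alternatively, moving to an airllm/transformers pairing in which the model class in use is recognized as SDPA-capable should make any patch like this unnecessary.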

My environment configuration:

CUDA = 11.8

Package                  Version
------------------------ ------------
accelerate               0.20.3
aiohttp                  3.9.1
aiosignal                1.3.1
airllm                   2.8.3
appdirs                  1.4.4
async-timeout            4.0.3
attrs                    23.2.0
bitsandbytes             0.39.0
certifi                  2023.11.17
charset-normalizer       3.3.2
click                    8.1.7
coloredlogs              15.0.1
datasets                 2.16.1
dill                     0.3.7
docker-pycreds           0.4.0
einops                   0.6.1
evaluate                 0.4.0
filelock                 3.13.1
flash-attn               2.4.2
frozenlist               1.4.1
fsspec                   2023.10.0
gitdb                    4.0.11
GitPython                3.1.40
huggingface-hub          0.20.2
humanfriendly            10.0
idna                     3.6
Jinja2                   3.1.2
joblib                   1.3.2
MarkupSafe               2.1.3
mpmath                   1.3.0
multidict                6.0.4
multiprocess             0.70.15
networkx                 3.2.1
ninja                    1.11.1.1
numpy                    1.26.3
nvidia-cublas-cu12       12.1.3.1
nvidia-cuda-cupti-cu12   12.1.105
nvidia-cuda-nvrtc-cu12   12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12        8.9.2.26
nvidia-cufft-cu12        11.0.2.54
nvidia-curand-cu12       10.3.2.106
nvidia-cusolver-cu12     11.4.5.107
nvidia-cusparse-cu12     12.1.0.106
nvidia-nccl-cu12         2.18.1
nvidia-nvjitlink-cu12    12.3.101
nvidia-nvtx-cu12         12.1.105
optimum                  1.16.1
packaging                23.2
pandas                   2.1.4
pathtools                0.1.2
peft                     0.3.0
pip                      23.3.1
protobuf                 4.25.1
psutil                   5.9.7
pyarrow                  14.0.2
pyarrow-hotfix           0.6
python-dateutil          2.8.2
pytz                     2023.3.post1
PyYAML                   6.0.1
regex                    2023.12.25
requests                 2.31.0
responses                0.18.0
safetensors              0.4.1
scikit-learn             1.2.2
scipy                    1.11.4
sentencepiece            0.1.99
sentry-sdk               1.39.1
setproctitle             1.3.3
setuptools               68.2.2
six                      1.16.0
smmap                    5.0.1
sympy                    1.12
threadpoolctl            3.2.0
tokenizers               0.15.0
torch                    2.1.2
tqdm                     4.66.1
transformers             4.37.0.dev0
triton                   2.1.0
typing_extensions        4.9.0
tzdata                   2023.4
urllib3                  2.1.0
wandb                    0.15.3
wheel                    0.41.2
xxhash                   3.4.1
yarl                     1.9.4
ivanbaldo commented 9 months ago

Does this work for you? https://github.com/lyogavin/Anima/issues/107#issuecomment-1922328939