Dao-AILab / flash-attention

Fast and memory-efficient exact attention
BSD 3-Clause "New" or "Revised" License

ImportError: flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi #966

Open foreverpiano opened 5 months ago

foreverpiano commented 5 months ago

I built flash_attn from source with PyTorch 2.3.0.

Code:

>>> from flash_attn import flash_attn_2_cuda
Hardware: A100 80GB, CUDA 12.1
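
For reference, a minimal version check before the failing import (a rough sketch, not part of flash-attn; it only assumes the flash-attn package is pip-installed). The undefined C++ symbol usually means the extension was compiled against a different torch build than the one installed at runtime, so these are the versions to compare first:

import sys
from importlib.metadata import version

import torch

print("python     :", sys.version.split()[0])
print("torch      :", torch.__version__)             # 2.3.0+cu121 in this env
print("torch cuda :", torch.version.cuda)            # 12.1
print("cxx11 abi  :", torch.compiled_with_cxx11_abi())
print("flash-attn :", version("flash-attn"))         # 2.3.0 per the env below

# torch must be imported first so libtorch/libc10 are loaded; if the versions
# above do not match what flash-attn was built against, this line still raises
# the undefined-symbol ImportError and flash-attn must be rebuilt/reinstalled.
from flash_attn import flash_attn_2_cuda  # noqa: F401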

my env:

accelerate==0.29.3
aiofiles==23.2.1
aiohttp==3.9.5
aiosignal==1.3.1
altair==5.3.0
annotated-types==0.6.0
anyio==4.3.0
appdirs==1.4.4
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
contourpy==1.2.1
cpm-kernels==1.0.11
cycler==0.12.1
datasets==2.19.1
dill==0.3.8
distro==1.9.0
docker-pycreds==0.4.0
einops==0.8.0
exceptiongroup==1.2.1
fastapi==0.110.2
ffmpy==0.3.2
filelock==3.13.4
flash-attn==2.3.0
fonttools==4.51.0
frozenlist==1.4.1
fschat==0.2.36
fsspec==2024.3.1
gitdb==4.0.11
GitPython==3.1.43
gradio==4.28.3
gradio_client==0.16.0
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.22.2
idna==3.7
importlib_resources==6.4.0
iniconfig==2.0.0
Jinja2==3.1.3
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
kiwisolver==1.4.5
-e git+https://github.com/RulinShao/LongChat-dev@3677918c376a6f5debddf1f2d74987e1b3ed93e4#egg=longchat
markdown-it-py==3.0.0
markdown2==2.4.13
MarkupSafe==2.1.5
matplotlib==3.8.4
mdurl==0.1.2
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.16
networkx==3.3
nh3==0.2.17
ninja==1.11.1.1
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
openai==1.23.6
orjson==3.10.1
packaging==24.0
pandas==2.2.2
pillow==10.3.0
pluggy==1.5.0
prompt-toolkit==3.0.43
protobuf==4.25.3
psutil==5.9.8
pyarrow==16.0.0
pyarrow-hotfix==0.6
pydantic==2.7.1
pydantic_core==2.18.2
pydub==0.25.1
Pygments==2.17.2
pyparsing==3.1.2
pytest==8.2.0
python-dateutil==2.9.0.post0
python-multipart==0.0.9
pytz==2024.1
PyYAML==6.0.1
referencing==0.35.0
regex==2024.4.16
requests==2.31.0
rich==13.7.1
rpds-py==0.18.0
ruff==0.4.2
safetensors==0.4.3
semantic-version==2.10.0
sentencepiece==0.2.0
sentry-sdk==2.0.1
setproctitle==1.3.3
shellingham==1.5.4
shortuuid==1.0.13
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
starlette==0.37.2
svgwrite==1.4.3
sympy==1.12
tiktoken==0.6.0
tokenizers==0.19.1
tomli==2.0.1
tomlkit==0.12.0
toolz==0.12.1
torch==2.3.0+cu121
torchaudio==2.3.0+cu121
torchvision==0.18.0+cu121
tqdm==4.66.2
transformers==4.40.1
triton==2.1.0
typer==0.12.3
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
uvicorn==0.29.0
wandb==0.16.6
wavedrom==2.0.3.post3
wcwidth==0.2.13
websockets==11.0.3
xxhash==3.4.1
yarl==1.9.4

@tridao

foreverpiano commented 5 months ago

https://github.com/Dao-AILab/flash-attention/issues/931

saurabh-kataria commented 5 months ago

You may want to simply try MAX_JOBS=8 pip install flash-attn --no-build-isolation. It seems to be working for me for now. P.S. 8 jobs seem to take about 200 GB of RAM, so adjust this parameter accordingly. P.P.S. Other people are reporting that this does not work for them. Some more info: I had reinstalled Anaconda with Python 3.8 and PyTorch 2.3.0. As others (@rkuo2000) are mentioning, only some specific version combinations work; you can try them.

CyberTimon commented 5 months ago

Same error. @saurabh-kataria's solution didn't work.

foreverpiano commented 5 months ago

@CyberTimon Yes, it doesn't work for me either.

CyberTimon commented 5 months ago

I could "fix" it by updating Python to 3.11, but that's not a proper solution.

rkuo2000 commented 5 months ago

Python 3.10.14, CUDA 12.1, Ubuntu 22.04.4 LTS, torch==2.3.0, flash-attn==2.5.8 works (2.5.9.post1 has the same failure).

yuquanle commented 5 months ago

Python 3.10.14, CUDA 12.1, Ubuntu 22.04.4 LTS, torch==2.3.0, flash-attn==2.5.8 works (2.5.9.post1 has the same failure).

Thanks. I tried Python 3.9.19, torch==2.3.0, flash-attn==2.5.8. It works.

Bellocccc commented 4 months ago

Thanks. Python 3.9.19, CUDA 12.2, torch==2.3.0, flash_attn==2.5.8, it works!

oceaneLIU commented 1 month ago

Thanks, flash_attn==2.5.8 works!

zhangj1an commented 1 month ago

flash_attn == 2.5.8 works, thanks

Luo-Z13 commented 1 month ago

Python 3.10.14, CUDA 12.1, Ubuntu 22.04.4 LTS, torch==2.3.0, flash-attn==2.5.8 works (2.5.9.post1 has the same failure).

Hello, how did you install flash-attn 2.5.8? @zhangj1an @oceaneLIU I get the following error:

Building wheels for collected packages: flash-attn
  Building wheel for flash-attn (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [19 lines of output]
      fatal: not a git repository (or any of the parent directories): .git

      torch.__version__  = 2.4.1+cu121

      /opt/conda/envs/mgm/lib/python3.10/site-packages/setuptools/__init__.py:94: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
      !!

              ********************************************************************************
              Requirements should be satisfied by a PEP 517 installer.
              If you are using pip, you can try `pip install --use-pep517`.
              ********************************************************************************

      !!
        dist.fetch_build_eggs(dist.setup_requires)
      running bdist_wheel
      Guessing wheel URL:  https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
      error: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)>
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for flash-attn
  Running setup.py clean for flash-attn
Failed to build flash-attn
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (flash-attn)

bighuang624 commented 1 month ago

cuda11.8, python3.11, pytorch==2.3.0, flash_attn==2.5.8 works, thanks for all discussion!

conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu118torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
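
For other Python/CUDA/torch combinations, the wheel filename can be assembled from the local environment instead of guessed by hand. A rough sketch based on the naming pattern above (the CUDA tag actually published for a given release may differ, e.g. cu122 rather than cu121 for CUDA 12.x, so verify the printed URL against the release page before downloading):

import sys

import torch

FA_VERSION = "2.5.8"  # the version this thread converged on

py = f"cp{sys.version_info.major}{sys.version_info.minor}"                       # e.g. cp311
cuda = "cu" + torch.version.cuda.replace(".", "")                                # e.g. cu118
torch_tag = "torch" + ".".join(torch.__version__.split("+")[0].split(".")[:2])   # e.g. torch2.3
abi = "TRUE" if torch.compiled_with_cxx11_abi() else "FALSE"

wheel = f"flash_attn-{FA_VERSION}+{cuda}{torch_tag}cxx11abi{abi}-{py}-{py}-linux_x86_64.whl"
print(f"https://github.com/Dao-AILab/flash-attention/releases/download/v{FA_VERSION}/{wheel}")

The printed URL can then be passed to pip install exactly as in the command above.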

XiaohanHwang commented 1 month ago

cuda11.8, python3.11, pytorch==2.3.0, flash_attn==2.5.8 works, thanks for all discussion!

conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu118torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl

Thanks a lot! It works.

qsKinoko commented 1 week ago

cuda11.8, python3.11, pytorch==2.3.0, flash_attn==2.5.8 works, thanks for all discussion!

conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu118torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl

It works. Thanks!