mustafaaljadery / gemma-2B-10M

Gemma 2B with 10M context length using Infini-attention.

can't install flash_attn #2

Closed Aniforka closed 4 months ago

Aniforka commented 4 months ago

My notebook: Windows 11 Pro 23H2, Intel i7-8750H, GeForce GTX 1050 Ti (Mobile), 32 GB RAM (2666 MHz)

pip install -r .\requirements.txt
Requirement already satisfied: torch in c:\users\anime\appdata\local\programs\python\python310\lib\site-packages (from -r .\requirements.txt (line 1)) (2.3.0)
Requirement already satisfied: transformers in c:\users\anime\appdata\local\programs\python\python310\lib\site-packages (from -r .\requirements.txt (line 2)) (4.40.2)
Requirement already satisfied: datasets in c:\users\anime\appdata\local\programs\python\python310\lib\site-packages (from -r .\requirements.txt (line 3)) (2.19.1)
Collecting flash_attn (from -r .\requirements.txt (line 4))
  Using cached flash_attn-2.5.8.tar.gz (2.5 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      fatal: not a git repository (or any of the parent directories): .git
      C:\Users\Anime\AppData\Local\Temp\pip-install-zqspt8qf\flash-attn_c20c0c86c12c4a6083c44ea61c202e13\setup.py:78: UserWarning: flash_attn was requested, but nvcc was not found.  Are you sure your environment has nvcc available?  If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
        warnings.warn(
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\Anime\AppData\Local\Temp\pip-install-zqspt8qf\flash-attn_c20c0c86c12c4a6083c44ea61c202e13\setup.py", line 134, in <module>
          CUDAExtension(
        File "C:\Users\Anime\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py", line 1077, in CUDAExtension
          library_dirs += library_paths(cuda=True)
        File "C:\Users\Anime\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py", line 1211, in library_paths
          paths.append(_join_cuda_home(lib_dir))
        File "C:\Users\Anime\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py", line 2419, in _join_cuda_home
          raise OSError('CUDA_HOME environment variable is not set. '
      OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.

      torch.__version__  = 2.3.0+cpu

      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
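For reference, the log above points to two missing prerequisites: `CUDA_HOME` is unset and `nvcc` is not on `PATH` (and `torch.__version__ = 2.3.0+cpu` shows a CPU-only PyTorch wheel, which cannot build CUDA extensions). The checks flash_attn's `setup.py` effectively performs can be reproduced with the standard library alone; this is a sketch, and `CUDA_PATH` is the variable the NVIDIA installer typically sets on Windows:

```python
# Sketch: check the two things flash_attn's setup.py needs before it can
# build its CUDA extension -- a CUDA install root and the nvcc compiler.
import os
import shutil

# CUDA_HOME is what torch.utils.cpp_extension reads; CUDA_PATH is the
# variable the Windows CUDA Toolkit installer usually sets.
cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
nvcc = shutil.which("nvcc")

print("CUDA install root:", cuda_home)  # None here -> the OSError above
print("nvcc on PATH:", nvcc)            # None here -> the UserWarning above
```

If both print `None`, installing flash_attn from source cannot succeed on that machine, regardless of pip version.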
1lmi commented 4 months ago

I can't install it either. If you find a solution, please tell me.

Genolium commented 4 months ago

+

katsu-chan commented 4 months ago

It should work without flash_attn. Just comment out its imports in gemma.py; if it still doesn't work, also comment out the FlashAttentionSomething class.
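Rather than deleting the import lines outright, the same effect can be had with a guarded import, so gemma.py still loads on machines without CUDA. This is only a sketch: `flash_attn_func` and the `HAS_FLASH_ATTN` flag are illustrative names, not code from this repo.

```python
# Sketch: make the flash_attn dependency optional so the module imports
# cleanly on CPU-only machines (as in this issue).
try:
    from flash_attn import flash_attn_func  # CUDA-only, may be missing
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn_func = None                  # fall back to plain attention
    HAS_FLASH_ATTN = False

print("flash_attn available:", HAS_FLASH_ATTN)
```

Any code path that calls into the flash-attention class would then branch on the flag instead of failing at import time.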

Aniforka commented 4 months ago

It should work without flash_attn. Just comment out its imports in gemma.py; if it still doesn't work, also comment out the FlashAttentionSomething class.

It works! Thanks

Aniforka commented 4 months ago

It should work without flash_attn. Just comment out its imports in gemma.py; if it still doesn't work, also comment out the FlashAttentionSomething class.

Did you manage to launch the model in the end? If so, please share the modified gemma.py.