zhangfaen / finetune-Qwen2-VL

MIT License

No module named 'flash_attn_2_cuda' #16

Closed: zarif98sjs closed this issue 1 week ago

zarif98sjs commented 2 weeks ago

I installed everything from requirements.txt, but I still get this error when I run finetune.py:

Traceback (most recent call last):
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1764, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 56, in <module>
    from flash_attn import flash_attn_varlen_func
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/flash_attn/__init__.py", line 3, in <module>
    from flash_attn.flash_attn_interface import (
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/flash_attn/flash_attn_interface.py", line 10, in <module>
    import flash_attn_2_cuda as flash_attn_cuda
ModuleNotFoundError: No module named 'flash_attn_2_cuda'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/finetune.py", line 6, in <module>
    from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
  File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1755, in __getattr__
    value = getattr(module, name)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1754, in __getattr__
    module = self._get_module(self._class_to_module[name])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/work/pi_hzamani_umass_edu/zarifalam_umass_edu/finetune-Qwen2-VL/qwen2venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1766, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.qwen2_vl.modeling_qwen2_vl because of the following error (look up to see its traceback):
No module named 'flash_attn_2_cuda'
zhangfaen commented 1 week ago

flash_attention_2 is not easy to use; maybe you can try not setting the attn_implementation parameter, i.e. in finetune.py change:

model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-2B-Instruct", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto"
    )

--->

model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-2B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
    )
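
Alternatively, if you want to keep an explicit setting without depending on flash_attn, a minimal sketch (an assumption on my part: your transformers version supports the "sdpa" implementation for Qwen2-VL) is to request PyTorch's built-in scaled-dot-product attention instead:

# Minimal sketch, not the repo's code: use PyTorch's built-in
# scaled-dot-product attention so the flash_attn package is never required
# to build the model. Assumes transformers supports "sdpa" for Qwen2-VL.
import torch
from transformers import Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    device_map="auto",
)

Note that this only helps once any broken flash-attn install is removed from the environment: as the traceback shows, modeling_qwen2_vl imports flash_attn at module load time whenever the package is installed, so the failure happens before attn_implementation is ever read.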
zarif98sjs commented 1 week ago

Even without that, it still needs flash_attention_2 when importing Qwen2VLForConditionalGeneration (the traceback above fails while transformers imports modeling_qwen2_vl, before attn_implementation is ever used).

Anyway, I was able to solve the issue by reinstalling flash-attn.
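
In case it helps others, a quick sanity check (a minimal sketch, run inside the same virtualenv that finetune.py uses) to confirm the compiled extension now loads:

# Minimal sanity check after reinstalling flash-attn: if any of these imports
# fail, the compiled CUDA extension still does not match this environment.
import torch
import flash_attn
import flash_attn_2_cuda  # the compiled extension the traceback complained about

print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)
print("flash_attn:", flash_attn.__version__)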

zhangfaen commented 1 week ago

> Even without that, it still needs flash_attention_2 when importing Qwen2VLForConditionalGeneration (the traceback above fails while transformers imports modeling_qwen2_vl, before attn_implementation is ever used).
>
> Anyway, I was able to solve the issue by reinstalling flash-attn.

cool