hzlcodus opened 7 months ago
This is caused by missing optional libs that ship with flash-attention. You need to get the flash-attention source code, then install layer_norm as described in https://github.com/Dao-AILab/flash-attention/blob/main/csrc/layer_norm/README.md and fused_mlp as described in https://github.com/Dao-AILab/flash-attention/blob/main/csrc/fused_dense_lib/README.md.
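For reference, the build steps from those READMEs look roughly like this (a sketch; check the linked READMEs for the exact, current commands, and make sure your CUDA toolchain matches your torch build):

```shell
# Build the optional fused kernels from the flash-attention source tree.
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention/csrc/layer_norm
pip install .            # provides the dropout_layer_norm module
cd ../fused_dense_lib
pip install .            # provides the fused_dense_lib module (FusedMLP)
```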
We will update installation doc soon.
If your machine does not support installing these libs, you can change the settings in config.py so that half precision and bf16 are not used. In that case, the code will use a naive attention implementation instead of flash attention.
@shepnerd can you please specify where to make the changes? My hardware does not support flash attention, and I just want to test inference of the model from the demo notebook.
You can refer to these instructions to install the dependencies needed to run flash-attn with layer_norm and the other components.
If your hardware does not support installing flash-attn and its dependencies, you can fall back to common attention by using full-precision compute in config.py to bypass it. Taking internvideo2_stage2_config.py as an example, you need to set the following variables to False:

use_half_precision = False
use_bf16 = False
use_flash_attn = False
use_fused_rmsnorm = False
use_fused_mlp = False
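The fallback behavior these flags control can be sketched as a guarded import (a minimal sketch; the function names are hypothetical, not the actual InternVideo2 code):

```python
# Sketch only: detect whether the optional fused kernels built from
# flash-attention's csrc/ directories are importable, and choose an
# attention implementation accordingly.
def fused_kernels_available():
    try:
        import dropout_layer_norm  # built from csrc/layer_norm
        import fused_dense_lib     # built from csrc/fused_dense_lib
    except ImportError:
        return False
    return True

def pick_attention_impl(use_flash_attn):
    # Fall back to naive attention when the flag is off or the
    # compiled extensions are missing.
    if use_flash_attn and fused_kernels_available():
        return "flash"
    return "naive"
```

With use_flash_attn set to False, the compiled extensions are never imported, which is why disabling the flags bypasses the ModuleNotFoundError.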
I am still having issues despite installing flash-attn and changing the config.py.
I get ModuleNotFoundError: No module named 'dropout_layer_norm' when running the demo notebook:
ModuleNotFoundError                       Traceback (most recent call last)
Code/InternVideo/InternVideo2/multi_modality/demo.ipynb Cell 1 line 11
      6 import torch
      8 from config import (Config,
      9                     eval_dict_leaf)
---> 11 from utils import (retrieve_text,
     12                    _frame_from_video,
     13                    setup_internvideo2)

File /Code/InternVideo/InternVideo2/multi_modality/utils.py:9
      6 import torch
      7 from torch import nn
----> 9 from models.backbones.internvideo2 import pretrain_internvideo2_1b_patch14_224
     10 from models.backbones.bert.builder import build_bert
     11 from models.criterions import get_sim

File /Code/InternVideo/InternVideo2/multi_modality/models/backbones/internvideo2/__init__.py:1
----> 1 from .internvl_clip_vision import internvl_clip_6b
      2 from .internvideo2 import pretrain_internvideo2_1b_patch14_224, pretrain_internvideo2_6b_patch14_224
      3 from .internvideo2_clip_vision import InternVideo2

File /Code/InternVideo/InternVideo2/multi_modality/models/backbones/internvideo2/internvl_clip_vision.py:16
     14 from flash_attention_class import FlashAttention
     15 from flash_attn.modules.mlp import FusedMLP
...
     10 def maybe_align(x, alignment_in_bytes=16):
     11     """Assume that x already has last dim divisible by alignment_in_bytes
     12     """

ModuleNotFoundError: No module named 'dropout_layer_norm'
Can you help me? To use InternVideo2, I installed flash-attn 2.6.3 and ran cd csrc/layer_norm && pip install ., but when I import dropout_layer_norm it reports a segmentation fault, and I've spent a day trying to find the cause.
An error occurred while running demo.ipynb in InternVideo2's multi_modality demo. I installed the packages according to requirements.txt.