kijai / ComfyUI-LuminaWrapper

MIT License

Hi, I'm not sure what's going on. I have already built Flash Attention, but I still get the warning: "UserWarning: Cannot import apex RMSNorm, trying flash_attn RMSNorm." #27

Open wibur0620 opened 2 months ago

wibur0620 commented 2 months ago

Total VRAM 8188 MB, total RAM 65268 MB
pytorch version: 2.2.2+cu121
xformers version: 0.0.25.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync

Loading: ComfyUI-Impact-Pack (V5.18.12)

Loading: ComfyUI-Impact-Pack (Subpack: V0.6)

[Impact Pack] Wildcards loading done.
D:\AI\ComfyUI\custom_nodes\ComfyUI-LuminaWrapper\lumina_models\components.py:10: UserWarning: Cannot import apex RMSNorm, trying flash_attn RMSNorm
  warnings.warn("Cannot import apex RMSNorm, trying flash_attn RMSNorm")
D:\AI\ComfyUI\custom_nodes\ComfyUI-LuminaWrapper\lumina_models\components.py:17: UserWarning: Cannot import flash_attn RMSNorm, falling back to PyTorch RMSNorm
  warnings.warn("Cannot import flash_attn RMSNorm, falling back to PyTorch RMSNorm")
Flash Attention is available

Loading: ComfyUI-Manager (V2.44.1)

ComfyUI Revision: 2327 [24b969d3] | Released on '2024-07-03'

ComfyUI-N-Sidebar is loading...

kijai commented 2 months ago

That's normal on Windows; it's just a warning about a separate optimization, unrelated to Flash Attention itself. You'd need to install apex to get the faster fused RMSNorm, or use Linux, where it can run via Triton.
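For context, the warnings come from an import-fallback chain in lumina_models/components.py: apex's fused RMSNorm is tried first, then flash_attn's, and finally a plain PyTorch implementation is used. A rough sketch of that pattern (illustrative, not the wrapper's verbatim source; `apex.normalization.FusedRMSNorm` and `flash_attn.ops.rms_norm.RMSNorm` are the usual module paths for the fused kernels):

```python
import warnings

import torch

try:
    # Fastest path: NVIDIA apex's fused RMSNorm (rarely buildable on Windows).
    from apex.normalization import FusedRMSNorm as RMSNorm
except ImportError:
    warnings.warn("Cannot import apex RMSNorm, trying flash_attn RMSNorm")
    try:
        # flash_attn also ships a fused RMSNorm, but it relies on an optional
        # CUDA extension that typical Windows builds omit, so this import can
        # fail even when Flash Attention itself reports as available.
        from flash_attn.ops.rms_norm import RMSNorm
    except ImportError:
        warnings.warn("Cannot import flash_attn RMSNorm, falling back to PyTorch RMSNorm")

        class RMSNorm(torch.nn.Module):
            """Plain PyTorch RMSNorm: same result as the fused kernels, just slower."""

            def __init__(self, dim: int, eps: float = 1e-6):
                super().__init__()
                self.eps = eps
                self.weight = torch.nn.Parameter(torch.ones(dim))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x * rsqrt(mean(x^2) + eps), scaled by a learned per-channel weight
                x_normed = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
                return x_normed * self.weight
```

So "Flash Attention is available" and the RMSNorm warnings are not contradictory: attention still uses flash_attn; only the normalization layers drop back to the unfused PyTorch path.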

pandayummy commented 2 months ago

I have the same problem. After several hours of investigation, I found it effectively impossible to build apex with CUDA/C++ extensions on Windows, so just ignore the warning.
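If the console noise bothers you, Python's standard warnings filter can hide those two messages. A minimal sketch, assuming you can run it before the wrapper module is imported (it is purely cosmetic and does not change which RMSNorm implementation is used):

```python
import warnings

# The "message" argument is a regex matched against the start of the warning
# text emitted by components.py; these filters only suppress the console
# output, the PyTorch RMSNorm fallback is still what gets used.
warnings.filterwarnings("ignore", message="Cannot import apex RMSNorm")
warnings.filterwarnings("ignore", message="Cannot import flash_attn RMSNorm")
```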