1-eyx opened 4 months ago
Never saw that before, but it's probably due to an old transformers version.
I updated it to the latest version and yeah, same error.
Where are you man!
I can't reproduce the error so dunno how I can help.
Are you sure you updated transformers for the portable install specifically?
As in, going to the ComfyUI_windows_portable\python_embeded folder and running:
python.exe -m pip install -U transformers
You can check your current version with:
python.exe -m pip show transformers
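If you want to be sure the upgrade landed in the embedded environment rather than a system-wide Python, one optional extra check is to ask that interpreter which executable and transformers version it actually sees:
python.exe -c "import sys, transformers; print(sys.executable, transformers.__version__)"
If the printed path is not inside ComfyUI_windows_portable\python_embeded, the upgrade went to the wrong interpreter.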
Uninstall the existing flash-attn package:
pip uninstall flash-attn -y
Clone the Flash-Attention repository:
git clone https://github.com/Dao-AILab/flash-attention.git
Navigate into the cloned flash-attention directory:
cd flash-attention
Install the flash-attn package from the repository without build isolation:
pip install . --no-build-isolation
Following these steps should resolve the error.
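If you go that route, it may be worth verifying the build from the same embedded interpreter before relaunching ComfyUI; the check below is only a sketch, and the flash_attn_2_cuda import simply mirrors the module named in the error:
python.exe -c "import flash_attn, flash_attn_2_cuda; print(flash_attn.__version__)"
If that import fails, the compiled CUDA extension is missing or was built against a different torch/CUDA combination than the one in the portable install.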
Is there any specific directory I need to run these commands in?
Flash attention itself is not needed to run Florence2 as long as you use sdpa or eager as the attention mode. Make sure everything else is up to date, especially torch and transformers. Never seen the error myself, so that's all I have.
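For context, the attention mode presumably maps onto the attn_implementation argument that transformers' from_pretrained accepts, which is why sdpa works without flash-attn installed. The snippet below is only an illustrative sketch of loading Florence-2 outside ComfyUI, not the node's own code; the model ID and dtype are assumptions:
from transformers import AutoModelForCausalLM, AutoProcessor
import torch

# Illustrative sketch: load Florence-2 with SDPA attention so flash-attn is never required.
# "microsoft/Florence-2-base" and float16 are assumptions, not taken from the node's setup.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base",
    attn_implementation="sdpa",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-base",
    trust_remote_code=True,
)
If this loads cleanly in the embedded interpreter, the failure is specific to flash-attn rather than to torch or transformers.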
Hey Kijai, thank you very much for all your work and for taking the time to reply. I was wondering if running it in 'flash_attn_2' would improve my performance, but the node is running fine in 'sdpa'.
I didn't notice any real difference myself, it's already so fast.
Error occurred when executing DownloadAndLoadFlorence2Model:
No module named 'flash_attn_2_cuda'
The error happens with all precision settings and all attention settings.
(The model is: Florence-2-base)