Closed — Iust1n2 closed this 5 months ago
From the error message it looks like an import error with bitsandbytes. I am also using bitsandbytes==0.42.0, accelerate==0.30.1, and transformers==4.39.3. You can check whether your environment is set up correctly. Are you running on GPU?
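One quick way to sanity-check the environment is to print the installed versions of the packages mentioned above and compare them against the known-working ones. A minimal sketch (the helper name is illustrative, not part of the app):

```python
from importlib.metadata import version, PackageNotFoundError


def report_versions(packages):
    """Map each package name to its installed version, or None if missing."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            # Package is not importable in this environment
            found[name] = None
    return found


if __name__ == "__main__":
    # Versions known to work per the comment above:
    # bitsandbytes==0.42.0, accelerate==0.30.1, transformers==4.39.3
    for pkg, ver in report_versions(
        ["bitsandbytes", "accelerate", "transformers"]
    ).items():
        print(f"{pkg}: {ver or 'not installed'}")
```

Any `not installed` line (or a version mismatch) points at the package to reinstall or pin.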
Needed to pull from causality labs submodule after initializing. Everything works properly on GPU. Btw, do you plan on releasing support to other VLMs from HuggingFace as in VL-Interpret? I would very much like to use this tool for interpretability purposes.
I got this error after I ran the app with `python app.py --model_name_or_path Intel/llava-gemma-2b --load_8bit`. I'm currently running accelerate 0.30.1 and bitsandbytes 0.42.0 on MPS with Python 3.10.
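bitsandbytes' 8-bit kernels are CUDA-only, which is consistent with `--load_8bit` failing on MPS but everything working on GPU. A hedged sketch of a fallback that skips quantization off-CUDA (the helper name and kwargs are illustrative, not the app's actual API; real code would pass `torch.float16` rather than a string):

```python
def choose_load_kwargs(device: str, load_8bit: bool) -> dict:
    """Pick model-loading kwargs; bitsandbytes 8-bit quantization requires
    CUDA, so fall back to half/full precision on MPS or CPU."""
    if load_8bit and device != "cuda":
        # bitsandbytes kernels are CUDA-only; drop the flag here instead of
        # letting the import fail later on MPS.
        load_8bit = False
    kwargs = {"device_map": device}
    if load_8bit:
        kwargs["load_in_8bit"] = True
    else:
        # fp16 on accelerators, fp32 on CPU
        kwargs["torch_dtype"] = "float16" if device in ("cuda", "mps") else "float32"
    return kwargs
```

With this kind of guard, `--load_8bit` on MPS degrades to fp16 instead of crashing with a bitsandbytes import error.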