IntelLabs / lvlm-interpret


cannot import name 'draw_graph' from 'plot_utils' #1

Closed: Iust1n2 closed 5 months ago

Iust1n2 commented 5 months ago

I got this error after running the app with `python app.py --model_name_or_path Intel/llava-gemma-2b --load_8bit`:

Traceback (most recent call last):
  File "/Users//Desktop/lvlm-interpret/app.py", line 30, in <module>
    demo = build_demo(args, embed_mode=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users//Desktop/lvlm-interpret/utils_gradio.py", line 224, in build_demo
    processor, model = get_processor_model(args)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users//Desktop/lvlm-interpret/utils_model.py", line 44, in get_processor_model
    model = LlavaForConditionalGeneration.from_pretrained(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/attention-viz/lib/python3.12/site-packages/transformers/modeling_utils.py", line 3049, in from_pretrained
    hf_quantizer.validate_environment(
  File "/opt/anaconda3/envs/attention-viz/lib/python3.12/site-packages/transformers/quantizers/quantizer_bnb_8bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`

I'm currently running accelerate 0.30.1 and bitsandbytes 0.42.0 on MPS with Python 3.10.

shaoyent-IL commented 5 months ago

From the error message, it looks like an import error with bitsandbytes. I am also using bitsandbytes==0.42.0, accelerate==0.30.1, and transformers==4.39.3. You can check if your environment is set up correctly. Are you running on a GPU?
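A minimal sketch of such a check is below. The assumption here is that transformers' bitsandbytes 8-bit quantizer gates on `torch.cuda.is_available()`, so on Apple MPS the ImportError can be raised even though accelerate and bitsandbytes are both installed:

```python
# Sketch: verify the pieces the 8-bit quantization path depends on.
# Assumption: the bnb 8-bit path needs a CUDA device, so on MPS the
# CUDA check below is False and loading with --load_8bit fails.
import importlib.util

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("MPS available:", torch.backends.mps.is_available())
print("accelerate installed:", importlib.util.find_spec("accelerate") is not None)
print("bitsandbytes installed:", importlib.util.find_spec("bitsandbytes") is not None)
```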

Iust1n2 commented 5 months ago

I needed to pull the causality labs submodule after initializing. Everything works properly on GPU. By the way, do you plan on releasing support for other VLMs from Hugging Face, as in VL-Interpret? I would very much like to use this tool for interpretability purposes.
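(The exact commands aren't shown in this thread, but initializing and pulling a submodule is typically done with `git submodule update --init --recursive` from the repository root.)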