Blaizzy / mlx-vlm

MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.

gradio chat exception #103

Open faev999 opened 3 days ago

faev999 commented 3 days ago

Hi all, I get the following exception when trying to run the Gradio example with: python -m mlx_vlm.chat_ui --model mlx-community/Qwen2-VL-72B-Instruct-4bit

.../mlx_vlm/chat_ui.py", line 32, in chat
    if len(message.files) >= 1:

AttributeError: 'dict' object has no attribute 'files'

When using the CLI example with: python -m mlx_vlm.generate --model mlx-community/Qwen2-VL-72B-Instruct-4bit --max-tokens 100 --temp 0.0 --image http://images.cocodataset.org/val2017/000000039769.jpg there's no exception.

I installed the package with: pip install mlx-vlm, and I get the same result with both Python 3.12 and Python 3.10.
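
For context, the traceback points at the shape of the message Gradio hands to the chat callback: in recent Gradio releases the multimodal textbox delivers a plain dict rather than an object with attributes. A minimal sketch of the assumed payload (keys inferred from the error and the fix below, not from the Gradio docs):

# Assumed shape of the payload Gradio's multimodal chat input
# passes to the callback in newer releases (path is hypothetical):
message = {
    "text": "What is in this image?",
    "files": ["/tmp/gradio/cat.jpg"],
}
# Attribute access (message.files) raises the AttributeError above;
# key access is what the dict form supports.
print(message["files"][-1])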

kaimerklein commented 1 day ago

I modified the chat function in chat_ui.py like this:

def chat(message, history, temperature, max_tokens):
    # Newer Gradio versions pass the multimodal message as a dict,
    # so use key access ("files"/"text") instead of attribute access.
    chat = []
    if len(message["files"]) >= 1:
        chat.append(message["text"])  # kept from the original, though unused below
    else:
        raise gr.Error("Please upload an image. Text only chat is not supported.")

    # Use the most recently uploaded file as the image input.
    files = message["files"][-1]
    if model.config.model_type != "paligemma":
        messages = apply_chat_template(processor, config, message["text"], num_images=1)
    else:
        messages = message["text"]

    # Stream the model output back to the UI incrementally.
    response = ""
    for chunk in stream_generate(
        model, processor, files, messages, image_processor, max_tokens, temp=temperature
    ):
        response += chunk
        yield response

Seems to work.
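
If you want the UI to tolerate both payload shapes, one option is to normalize the message before touching it. A minimal sketch, assuming only the two shapes discussed in this thread; get_message_parts is a hypothetical helper, not part of mlx-vlm:

def get_message_parts(message):
    # Hypothetical helper: newer Gradio passes a dict
    # {"text": ..., "files": [...]}; older versions passed an
    # object exposing .text and .files attributes.
    if isinstance(message, dict):
        return message["text"], message["files"]
    return message.text, message.files

Then the first lines of chat become text, files = get_message_parts(message), and the rest of the fix above stays unchanged.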