Closed: si2w closed this issue 1 month ago
I have a similar error: the exception and the message are the same. MBP M2 Max, 32 GB, macOS 15.1, with LM Studio 0.3.4 (build 4). The model I'm trying to load is Qwen2-VL-7B-Instruct-4bit, and the same happens with the other Qwen VL models. At the same time, I can successfully use llava-v1.6-mistral-7b-4bit, Llama-3.2-3B-Instruct-8bit, and Qwen2.5-Coder-7B-Instruct-8bit.
🥲 Failed to load the model. Error when loading model: ValueError: Model type qwen2_vl not supported.
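The ValueError above is the loader rejecting the model_type field in the model's config.json before loading weights. A minimal sketch of that kind of check follows; the supported set and the function name are illustrative assumptions, not LM Studio's actual code:

```python
import json
from pathlib import Path

# Hypothetical set of model types the engine recognizes; the real list
# lives inside the MLX engine and changes between versions.
SUPPORTED_MODEL_TYPES = {"llama", "mistral", "qwen2", "llava"}

def check_model_supported(model_dir: str) -> str:
    """Read config.json from a model directory and reject unknown types,
    mirroring the error message shown in the report above."""
    config = json.loads(Path(model_dir, "config.json").read_text())
    model_type = config.get("model_type", "unknown")
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise ValueError(f"Model type {model_type} not supported.")
    return model_type
```

Under this sketch, a Qwen2-VL checkpoint (model_type "qwen2_vl") fails while a plain Qwen2 text model loads, which matches the pattern in the report: the VL variants fail and the text-only models work.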
Our Apple MLX engine currently only supports macOS 14 and up. Sorry for the inconvenience.
I tried LM Studio 0.3.4 on a Mac Studio M2 Max, 32 GB (Ventura 13.5 (22G74)), with both Llama 3.1 and 3.2 in MLX format. I don't get any response from LM Studio (Error 6). What's the issue?
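The reports above line up with the stated requirement: Ventura is macOS 13, below the MLX engine's macOS 14 minimum, while the first reporter's 15.1 clears it. A minimal sketch of such a version gate (the function name and cutoff handling are assumptions; on a Mac the version string itself would come from something like `sw_vers -productVersion`):

```python
def macos_supported(version: str, minimum: tuple[int, int] = (14, 0)) -> bool:
    """Compare a macOS version string like '13.5' against a minimum version."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    # Pad single-component strings like '14' to (14, 0) for tuple comparison.
    parts = parts + (0,) * (2 - len(parts))
    return parts >= minimum
```

With the macOS 14 minimum, macos_supported("15.1") is True and macos_supported("13.5") is False, which would explain why MLX models fail on the Ventura machine.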