mudler / LocalAI

There is something wrong with VLM #2668

Open techResearcher2021 opened 4 months ago

techResearcher2021 commented 4 months ago

LocalAI version:

using docker image: latest-aio-gpu-nvidia-cuda-12
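
For reference, a minimal sketch of how such a container is typically started (based on the LocalAI quickstart; the container name and port mapping here are assumptions, adjust to your setup):

```bash
# Sketch: start the all-in-one GPU image with NVIDIA CUDA 12 support.
# The container name and host port are assumptions, not from the report.
docker run -d --name local-ai \
  --gpus all \
  -p 8080:8080 \
  localai/localai:latest-aio-gpu-nvidia-cuda-12
```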

Environment, CPU architecture, OS, and Version:

Linux 0fe2bf31da79 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Describe the bug

The vision-language model llava-v1.6-mistral-7b.Q5_K_M does not work correctly.

To Reproduce

I just run the Docker container and open the chat at http://localhost:8080/chat/gpt-4-vision-preview. I upload an image and ask "What does the image describe?" A request equivalent to what the chat UI sends is sketched below.
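
The same request can also be reproduced directly against the OpenAI-compatible endpoint that appears in the logs (/v1/chat/completions). This is a sketch only: the prompt text and image URL are placeholders, and gpt-4-vision-preview is the alias used by the chat UI above.

```bash
# Sketch: reproduce the vision request without the web UI.
# The image URL below is a placeholder; replace it with a reachable image.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-vision-preview",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What does the image describe?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}}
      ]
    }]
  }'
```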

Expected behavior

The model runs inference and answers the question.

Logs

2024-06-27 14:26:30 6:26AM INF LocalAI version: v2.17.1 (8142bdc48f3619eddc6344fa4ed83b331f7b37c2)
2024-06-27 14:26:30 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:26:30 WARNING: failed to read int from file: open /sys/class/drm/card0/device/numa_node: no such file or directory
2024-06-27 14:26:30 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:26:30 WARNING: error parsing the pci address "vgem"

....

2024-06-27 14:27:27 6:27AM INF Loading model 'llava-v1.6-mistral-7b.Q5_K_M.gguf' with backend llama-cpp
2024-06-27 14:27:27 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:27:27 WARNING: failed to read int from file: open /sys/class/drm/card0/device/numa_node: no such file or directory
2024-06-27 14:27:27 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:27:27 6:27AM INF Success ip=172.17.0.1 latency=1.21655ms method=POST status=200 url=/v1/chat/completions
2024-06-27 14:27:27 6:27AM INF Loading model 'llava-v1.6-mistral-7b.Q5_K_M.gguf' with backend llama-cpp
2024-06-27 14:27:27 WARNING: error parsing the pci address "vgem"
2024-06-27 14:27:27 6:27AM INF [llama-cpp] attempting to load with AVX2 variant
2024-06-27 14:27:29 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:27:29 WARNING: failed to read int from file: open /sys/class/drm/card0/device/numa_node: no such file or directory
2024-06-27 14:27:29 WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
2024-06-27 14:27:29 WARNING: error parsing the pci address "vgem"
2024-06-27 14:27:29 6:27AM INF [llama-cpp] attempting to load with AVX2 variant

Additional context

pedroresende commented 2 months ago

Any news on this?

mhaustria2 commented 1 month ago

I am getting the same error, but only since today. It worked like a charm for the last 4 days. Any idea what the cause could be?

gklank commented 1 month ago

Hi,

I am using the image localai/localai:v2.22.0-aio-gpu-nvidia-cuda-11 and I have the same issue!

Any ideas that could help?

Regards

Gerhard