mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

stderr qemu: uncaught target signal 4 (Illegal instruction) - core dumped #1968

Closed: DavidGOrtega closed this issue 6 months ago

DavidGOrtega commented 7 months ago

LocalAI version: docker image latest-aio-cpu (I assume v2.11.0)

Environment, CPU architecture, OS, and Version: macOS 13.3.6 on an Apple M1

Describe the bug: Not a single API works; every request throws the error below.

infra-api-1  | 7:26PM DBG GRPC Service Started
infra-api-1  | 7:26PM DBG GRPC(phi-2-chat-127.0.0.1:35789): stderr qemu: uncaught target signal 4 (Illegal instruction) - core dumped
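For context, the qemu line means the container is running under emulation, which happens when an amd64-only image is pulled on Apple silicon; backend binaries built with x86 vector instructions then die with signal 4. A quick check with the standard Docker CLI (a sketch, not taken from the original report):

docker image inspect localai/localai:latest-aio-cpu --format '{{.Os}}/{{.Architecture}}'
# host architecture for comparison; prints arm64 on an M1 Mac
uname -m

If the first command prints linux/amd64 while the host is arm64, the backend binaries are being emulated by qemu.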

Reproduce

version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
    volumes:
      - ./models:/build/models:cached
docker-compose up
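As a hedged workaround, Compose can pin the service to the host's native platform, assuming the tag actually publishes an arm64 image (worth verifying first; see the manifest command later in the thread):

services:
  api:
    image: localai/localai:latest-aio-cpu
    platform: linux/arm64   # assumption: an arm64 variant exists for this tag

If no arm64 variant exists, this fails at pull time instead of crashing at run time, which at least makes the mismatch explicit.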

Logs: In the logs I can also see another possible error:

2024-04-07 21:15:57 7:15PM INF [/build/backend/python/vall-e-x/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
olariuromeo commented 7 months ago

It seems to be a lack of memory; if you provide more information, maybe you could get help. It also seems that you have not compiled the gRPC backend (see https://localai.io/basics/build/). To install Go in the local app root folder you can use asdf (https://asdf-vm.com/guide/getting-started.html), then run asdf install golang 1.21.7 and asdf local golang 1.21.7, and compile the backend before running the model. It is possible that you also need other dependencies, depending on your system, and you should probably update Docker to the latest version.
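A minimal sketch of that sequence, assuming a local checkout of the LocalAI sources; the make invocation follows the build docs at https://localai.io/basics/build/, and the GO_TAGS values are taken from the error message earlier in the thread:

# Go toolchain via asdf, as suggested above
asdf plugin add golang
asdf install golang 1.21.7
asdf local golang 1.21.7

# build LocalAI with the extra backends enabled
git clone https://github.com/mudler/LocalAI && cd LocalAI
make GO_TAGS=stablediffusion,tts build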

DavidGOrtega commented 7 months ago

@olariuromeo thanks for the reply.

  1. I have not compiled anything. I'm using the Docker image, which is already built with all the deps (we assume).
  2. I have 12 GB of memory assigned to Docker, which is far more than Phi-2 needs under llama.cpp.
  3. Even using embeddings, which is also a very small model, I get the same error.

Does this help bring some light?

localai-bot commented 7 months ago

I understand that you're experiencing issues with the LocalAI Docker image in your local environment. From the information you've provided, the main error is an "Illegal instruction" (signal 4), which indicates that the binary uses CPU instructions the processor, or here the qemu emulator, does not support. This is more likely to happen on ARM-based systems, like the M1 chip you mentioned.

Here are a few things you can try to resolve the issue:

  1. Check CPU compatibility: Ensure that the model you're trying to use is compatible with the CPU architecture of your system. Some models may not be built with explicit support for ARM-based systems, such as the M1 chip. (A way to check the image's published architectures is sketched after this list.)
  2. Update Docker: Update Docker to the latest version to ensure you're using the most compatible version. Docker updates often include support for new hardware architectures.
  3. Check system dependencies: Make sure you have all the required dependencies installed in your system. In your case, the error message suggests that you might be missing the necessary dependencies to compile the GRPC backend. Install the required dependencies and then try compiling the backend manually before running the model.
  4. Try a different model or container: If the issue persists, try using a different model or a docker container that is known to work on your system architecture. This can help identify whether the issue is related to a particular model or container.
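A sketch of how to act on points 1 and 4 with the standard Docker CLI; whether this tag publishes an arm64 manifest is an assumption to verify, not a given:

# list the os/architecture combinations published for the tag
docker manifest inspect localai/localai:latest-aio-cpu | grep -iA2 '"platform"'
# pull the native variant only if arm64 shows up in the list above
docker pull --platform linux/arm64 localai/localai:latest-aio-cpu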

If none of the above steps resolve the issue, please provide more details about the specific model you're trying to use and any additional error messages you encounter. This will help in further troubleshooting the problem.

Remember that this is an experiment by @mudler, and though I'm designed to provide assistance, I may not always have the perfect solution. In such cases, please open an issue on the LocalAI GitHub repository for further assistance.