LocalAI version:
localai:v2.10.0

Environment, CPU architecture, OS, and Version:
Windows 11, 11th Gen Intel(R) Core(TM) i9-11900K, NVIDIA RTX 3090
Describe the bug
I ran the command from the official website:

docker run -ti -p 8080:8080 --gpus all localai/localai:v2.10.0-cublas-cuda12-core mixtral-instruct

It set up correctly, and the output confirmed that the mixtral-instruct model had started. But when I send:

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d "{ \"model\": \"mixtral-instruct\", \"prompt\": \"How are you doing?\" }"

the server responds with:

{"error":{"code":500,"message":"runtime error: invalid memory address or nil pointer dereference","type":""}}
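For reference, here is a small Python sketch of the same request data. The payload and the error reply are taken verbatim from the report above; building the payload with json.dumps rules out shell-quoting problems in the escaped curl -d argument as the cause (this is just a diagnostic sketch, not part of LocalAI itself):

```python
import json

# The same JSON body the curl command sends to the OpenAI-compatible
# /v1/completions endpoint, built in Python so the quoting is unambiguous.
payload = {"model": "mixtral-instruct", "prompt": "How are you doing?"}
body = json.dumps(payload)
print(body)  # well-formed JSON, so the escaping in curl -d is not the issue

# The reply quoted above, parsed to show the server returns a structured
# JSON error object (HTTP 500), not a malformed or truncated response.
reply = '{"error":{"code":500,"message":"runtime error: invalid memory address or nil pointer dereference","type":""}}'
error = json.loads(reply)["error"]
print(error["code"], error["message"])
```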
To Reproduce
Run the docker command above, then send the curl completion request; the 500 error is returned.
Expected behavior
The /v1/completions request should return a normal completion instead of a 500 error.
Logs
The Docker logs show nothing wrong; no errors are reported when the request fails.
Additional context