Same here... Compiling from source with the latest commit to see if it is fixed...
EDIT1: Rebuilding did not help :(. This issue seems to be related to #1447, #288, and #950. I think it is an issue in llama.cpp. I thought that commit 86a8df1c8b44c7e18aceae04cf9b912677c1bdb2 fixed it, but it does not seem like it did...
EDIT2: Logs, if they are useful:
Glad to hear I'm not alone. Let me know if you have a breakthrough. :)
Great news @that1guy! Got it working! Building from source with this modified Dockerfile worked! Here are the steps I used:
git clone https://github.com/mudler/LocalAI.git
cd LocalAI
wget https://github.com/mudler/LocalAI/files/13705244/Dockerfile.txt -O Dockerfile
docker build -t localai .
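For anyone who can't grab that attachment: as far as I can tell, the gist of the modified Dockerfile is just setting llama.cpp's CMake feature flags so the backend gets rebuilt for AVX-only CPUs. A minimal sketch of that kind of change (assumed, not copied from the attachment):

```Dockerfile
# Hypothetical excerpt -- the actual attachment may differ. These are
# llama.cpp's CMake feature flags; disabling AVX2/AVX512/FMA/F16C
# forces an AVX-only build of the backend.
ENV REBUILD=true
ENV CMAKE_ARGS="-DLLAMA_AVX=ON -DLLAMA_AVX2=OFF -DLLAMA_AVX512=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF"
```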
@justaCasualCoder thanks for digging in and sharing your findings! When I manually diffed the Dockerfile you found against the one in the main repo, I noticed that all it does is add the same CPU flags.
Ultimately, I just pulled down the new 2.0.1 Dockerfile and everything worked. I guess I was just experiencing an edge case related to 2.0.0.
My CPU only supports AVX, not AVX2 or AVX512. This is causing issues I can't seem to work around, even when rebuilding with the proper CPU flags. See my .env:
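For anyone comparing notes, the knobs involved here are the documented REBUILD and CMAKE_ARGS variables; a minimal sketch of the relevant .env lines, assuming llama.cpp's LLAMA_* CMake options and an AVX-only CPU:

```sh
# .env sketch -- flag values assume an AVX-only CPU; check what your CPU
# actually advertises first, e.g.: grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
REBUILD=true
CMAKE_ARGS="-DLLAMA_AVX=ON -DLLAMA_AVX2=OFF -DLLAMA_AVX512=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF"
```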
LocalAI version: Latest
Environment, CPU architecture, OS, and Version:
Describe the bug
To Reproduce
Issue HTTP Request (see the illustrative curl sketch below):
Receive HTTP 500 Error Response:
Expected behavior
HTTP 200 response
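As a point of reference, the call was an OpenAI-style chat completion request. The sketch below uses LocalAI's OpenAI-compatible endpoint, but the model name and payload are stand-ins, not the exact request from this report:

```sh
# Illustrative request -- "lunademo" matches the model config referenced
# below; adjust host/port to your deployment.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lunademo",
    "messages": [{"role": "user", "content": "How are you?"}]
  }'
# On the affected builds this returns HTTP 500 instead of a completion.
```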
Logs
Additional context
Using the following model and configs:
Lunademo.yaml:
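(The actual file was attached to the report; the sketch below only shows the general shape of such a config, with the model filename and template name taken from LocalAI's public lunademo example rather than from the attachment.)

```yaml
# Sketch of a LocalAI model config -- values assumed from the public
# lunademo example; the attached file may differ.
name: lunademo
parameters:
  model: luna-ai-llama2-uncensored.Q4_K_M.gguf
context_size: 2000
template:
  chat: lunademo-chat
```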