Closed luoweb closed 5 months ago
The issue was solved when I rebuilt without BUILD_TYPE=metal. I think this is a bug with BUILD_TYPE=metal; maybe the binary bakes in the build flag, the same way it bakes in the build version.
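One detail worth noting when rebuilding: in `make`, prefixing the command with `BUILD_TYPE=` passes an *empty* value, which is not the same as leaving the variable unset entirely. A tiny shell-only sketch of the difference (no LocalAI involved):

```shell
# `VAR= cmd` exports VAR as an *empty* string into cmd's environment,
# which is different from VAR being completely unset.
BUILD_TYPE= sh -c 'echo "with empty assignment: [${BUILD_TYPE-unset}]"'
env -u BUILD_TYPE sh -c 'echo "fully unset: [${BUILD_TYPE-unset}]"'
# with empty assignment: []
# fully unset: [unset]
```

Depending on how the Makefile tests the variable (`ifeq ($(BUILD_TYPE),)` vs `ifdef`), these two cases may or may not behave identically, so it can be worth trying a `make clean` followed by a rebuild with the variable fully unset.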
I also ran into this, but I had a slightly different error. The process crashes as soon as I try to use an api endpoint that requires loading of the model. For example:
% curl http://127.0.0.1:8080/v1/completions -H "Content-Type: application/json" -d '{
"model": "ggml-gpt4all-j.bin",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
curl: (52) Empty reply from server
curl sees the server close the connection with no content, and the following SIGILL: illegal instruction output appears in the container where local-ai is running.
I built it with BUILD_TYPE= set to an empty value. This is in the quay.io/go-skynet/local-ai:v1.20.1-ffmpeg docker image.
My guess is that local-ai includes some compiled code that my very old processor doesn't support. The machine has a ton of RAM, and I was hoping to run at least some models. I too ran into this trying to run the gpt4all example. In particular, here's the helm command I used, along with my values:
% helm upgrade --install -n ai go-skynet/local-ai --version 2.1.0 --values values.yaml
@programmerq Same error.
Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
stepping : 4
microcode : 0x42e
cpu MHz : 3299.828
cache size : 25600 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp fsgsbase smep erms xsaveopt md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 6600.00
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
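The flags line above is consistent with that guess: the E5-2667 v2 is an Ivy Bridge part, and the CPU reports avx and f16c but no avx2 and no fma. A binary compiled with AVX2/FMA instructions (common in default llama.cpp-based builds) would raise SIGILL on this machine the first time such an instruction executes. A small sketch that checks a cpuinfo flags line for the relevant extensions (`check_flags` is a hypothetical helper name, POSIX sh):

```shell
# check_flags FLAGS: report which SIMD extensions appear in a
# cpuinfo "flags" line (hypothetical helper, POSIX sh).
check_flags() {
  for f in avx avx2 fma f16c; do
    # Pad with spaces so each flag matches only as a whole word.
    case " $1 " in
      *" $f "*) echo "$f: present" ;;
      *)        echo "$f: MISSING" ;;
    esac
  done
}

# Abridged flags line from the cpuinfo dump above:
check_flags "fpu sse sse2 ssse3 sse4_1 sse4_2 avx f16c aes xsave"
# avx: present
# avx2: MISSING
# fma: MISSING
# f16c: present
```

On a live Linux box the same check can be run against `grep '^flags' /proc/cpuinfo`; if avx2 or fma is MISSING, a build targeting those instruction sets will crash with SIGILL exactly as described.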
Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!
but... I can also be funny or helpful :smile_cat: and I can provide generally good tips, or point to places to look in the documentation or the code, based on what you wrote in the issue.
Don't engage in conversation with me, I don't support (yet) replying!
Firstly, the error seems to be caused by a Go program and the FastHTTP library is involved.
If it happens again, I would recommend checking for any logs related to the error, including stack traces, and sharing them with me.
Another tip would be to ensure that your GOPATH environment variable is set up correctly. Make sure that the necessary paths have been added, so that local packages can be imported correctly.
In addition, if you use a program other than Go in this process, make sure it's configured correctly too.
If I receive more information on your context or specific issue, I can provide a better response next time.
LocalAI version:
(base) block@192 LocalAI % ./local-ai --version
LocalAI version LocalAI v1.20.1-1-g3829aba-dirty (3829aba869f8925dde7a1c9f280a4718dda3a18c)
Environment, CPU architecture, OS, and Version:
Mac OS 13.4 M2 Pro
Describe the bug
Testing the localai-embeddings integration as described at https://docs.flowiseai.com/embeddings/localai-embeddings, the local-ai process always exits.
To Reproduce
localai-embeddings:
Test the localai-embeddings integration as described at https://docs.flowiseai.com/embeddings/localai-embeddings; the local-ai process always exits.
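For reference, a minimal embeddings request against a local instance looks roughly like this; the model name here is a placeholder and should be replaced with whatever embeddings model is actually configured under --models-path:

```shell
# Hypothetical request to the OpenAI-compatible embeddings endpoint;
# "bert-embeddings" is a placeholder model name, not from this report.
curl http://127.0.0.1:8080/v1/embeddings -H "Content-Type: application/json" -d '{
  "model": "bert-embeddings",
  "input": "A test sentence"
}'
```

If the process exits during this call on an M2 with a metal build, that matches the BUILD_TYPE=metal behavior discussed earlier in the thread.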
Expected behavior
./local-ai --models-path /Users/block/code/data/models --debug true
Logs
Additional context