Container v2.23.0 contains the same issue (presumably a llama.cpp issue):
```
LocalAi-GPT | make[2]: Entering directory '/build/backend/cpp/llama-avx'
LocalAi-GPT | mkdir -p llama.cpp/examples/grpc-server
LocalAi-GPT | bash prepare.sh
LocalAi-GPT | Applying patch 01-llava.patch
LocalAi-GPT | patching file examples/llava/clip.cpp
LocalAi-GPT | patch unexpectedly ends in middle of line
```
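For context, GNU patch typically prints "patch unexpectedly ends in middle of line" when the last line of a patch file has no trailing newline (a truncated download, or CRLF/transfer damage). A quick check, assuming the patch sits next to prepare.sh inside the build tree (the exact path in the image is an assumption):

```sh
# Hypothetical diagnostic, run inside the build container; the patch path is assumed.
cd /build/backend/cpp/llama-avx
tail -c 1 01-llava.patch | od -c   # should print '\n'; anything else means the file is truncated
file 01-llava.patch                # also reports "with CRLF line terminators" if endings are wrong
```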
LocalAI version: localai/localai:latest-gpu-nvidia-cuda-12, SHA ff0b3e63d517 (also occurs on the v2.22.1 container image)
Environment, CPU architecture, OS, and Version:
- Kernel: Linux server 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- OS Version: Ubuntu 24.04
- Portainer: 2.19.5-CE
- CPU: Intel Core i9-13900K
- RAM: 32 GB
- GPU: NVIDIA RTX 4090
Describe the bug (suspected issue: broken llama.cpp builds on later versions). Reported as: "builds everything for about 2 hours and ends with this". Upon reproducing the "bug", the patch step seems to stop and wait for input from the user.
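The "waiting for input" symptom is consistent with GNU patch falling back to its interactive prompt (e.g. "File to patch:") when a hunk cannot be applied. A minimal sketch of a non-interactive re-run that fails loudly instead of hanging; the path and -p level are assumptions, since prepare.sh's exact invocation is not shown:

```sh
# Dry-run the patch in batch mode so a bad hunk aborts instead of prompting.
cd /build/backend/cpp/llama-avx/llama.cpp
patch -p1 --batch --dry-run < ../01-llava.patch
```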
To Reproduce: deploy with Docker using the following docker-compose.yaml.
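The reporter's actual compose file is not shown here; this is a minimal stand-in, assuming the image from the version line above and REBUILD=true (the in-container make/prepare.sh output in the logs implies backends are being recompiled at startup). Service name, port, and volume are illustrative:

```sh
# Hypothetical stand-in for the missing compose file; names, port, volume,
# and REBUILD=true are assumptions based on the logs above.
cat > docker-compose.yaml <<'EOF'
services:
  localai:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    container_name: LocalAi-GPT
    environment:
      - REBUILD=true        # matches the in-container build output above
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF
docker compose up -d && docker compose logs -f
```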
Expected behavior: Build and launch of the server.
Logs

Log provided by user:
Log from v2.22.1 container:
Additional context: I am not the one who personally experienced this issue; it was reported in the #help Discord channel (I am creating this issue as requested). However, I have reproduced the issue on another image.