ark626 opened this issue 8 months ago
I think libfalcon is not compatible with CUDA.
As a workaround, you can skip building the falcon backend:
make GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
See also the docs for an example: https://localai.io/basics/build/#build-only-a-single-backend
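For a CUDA device, the single-backend build can be combined with a CUDA-enabled build type. A minimal sketch, assuming the BUILD_TYPE=cublas variable described in the same build docs is available in v1.25.0:

```sh
# Sketch: build only the llama-cpp backend, with cuBLAS/CUDA enabled.
# BUILD_TYPE=cublas is taken from the LocalAI build docs and is an assumption
# here; adjust it if your checkout of v1.25.0 uses a different variable.
make BUILD_TYPE=cublas GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
```

This keeps the falcon backend out of the build entirely, so its CUDA incompatibility never comes into play.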
LocalAI version: v1.25.0
Environment, CPU architecture, OS, and Version: aarch64 CPU, Ubuntu 20.04
Describe the bug
While trying to run BUILD_GRPC_FOR_BACKEND_LLAMA=ON make build after applying the following fix to the tagged version v1.25.0,
I get the following follow-up error:
Additional context
Is there any version that can be compiled on a Jetson Xavier AGX?
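For reference, a sketch of how the workaround suggested above could be combined with this build invocation, using only the variables already mentioned in this thread; whether BUILD_GRPC_FOR_BACKEND_LLAMA=ON is still needed once the falcon backend is skipped is an open question:

```sh
# Sketch: the reporter's build command restricted to the llama-cpp backend,
# so the falcon backend (apparently not CUDA-compatible) is never compiled.
BUILD_GRPC_FOR_BACKEND_LLAMA=ON make GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
```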