Closed: vshapenko closed this issue 10 months ago
If using BUILD_GRPC_FOR_BACKEND_LLAMA, my PR might help you: #1576. Have a read there. Make sure to run `make clean` before starting over.
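For reference, a clean rebuild along those lines might look like this (a minimal sketch, assuming the standard LocalAI Makefile targets from the build guide; BUILD_GRPC_FOR_BACKEND_LLAMA is the variable discussed in this thread):

```shell
# Start from a pristine tree so no stale generated protobuf sources survive
make clean

# Rebuild, letting the Makefile build its own gRPC instead of using system libraries
make BUILD_GRPC_FOR_BACKEND_LLAMA=true build
```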
I will take a look, thx a lot
I tried to build with the new Makefile (as I understand, your PR is already merged), still no luck. It looks like a weird problem to me, because I am getting all these "files generated with an older version of protobuf" messages. I checked, and every protoc installed on the machine has the same version, so any help is appreciated, @dionysius.
Yes, so the changes in that PR do not "solve" anything. As noted there, this workaround is technical debt and shouldn't have existed in the first place. But it is still good that you got a new error; that was the point of that PR. And I've seen that one myself.
BUILD_GRPC_FOR_BACKEND_LLAMA builds the full set of required gRPC dependencies, so no extra installed libraries are required. But you seem to have them installed. Uninstall some packages (see below) if you can remove them safely and don't need them otherwise; they are now hindering the build process.
Please post the new errors, if any. In the meantime I'll set up the same situation and see if I can exclude preinstalled libraries during that build.
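In case it helps, one way to see which protobuf/gRPC development packages are currently present on a Debian/Ubuntu system (a generic sketch, not specific to LocalAI):

```shell
# List installed packages whose names mention protobuf or grpc
dpkg -l | grep -Ei 'protobuf|grpc'
```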
Update: having these not installed (see PR #1593):
protobuf-compiler protobuf-compiler-grpc protobuf-c-compiler libprotobuf-c-dev libprotobuf-dev libprotoc-dev libgrpc-dev libgrpc++-dev
makes the build with BUILD_GRPC_FOR_BACKEND_LLAMA succeed (so far). Effectively, it might only be a subset of those packages.
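Assuming a stock Ubuntu setup and that nothing else on the machine needs them, removing the packages listed above could look like this (a sketch; review the dependency resolution apt prints before confirming):

```shell
sudo apt-get remove protobuf-compiler protobuf-compiler-grpc protobuf-c-compiler \
  libprotobuf-c-dev libprotobuf-dev libprotoc-dev libgrpc-dev libgrpc++-dev
```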
I found the culprit; the fix is quite simple, see: https://github.com/mudler/LocalAI/pull/1593/files
I made this change locally, but I still get the following errors:
In file included from /home/vshapenko/LocalAI/backend/cpp/llama/llama.cpp/build/examples/grpc-server/backend.grpc.pb.cc:5:
/home/vshapenko/LocalAI/backend/cpp/llama/llama.cpp/build/examples/grpc-server/backend.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
   17 | #error This file was generated by an older version of protoc which is
      | ^~~~~
/home/vshapenko/LocalAI/backend/cpp/llama/llama.cpp/build/examples/grpc-server/backend.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
   18 | #error incompatible with your Protocol Buffer headers. Please
      | ^~~~~
/home/vshapenko/LocalAI/backend/cpp/llama/llama.cpp/build/examples/grpc-server/backend.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
   19 | #error regenerate this file with a newer version of protoc.
It looks like the local version of protoc is also involved somewhere.
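One quick way to confirm which protoc the build is actually picking up, and its version (a sketch; compare the output against the protoc that the vendored gRPC build produces):

```shell
# Show every protoc on PATH, then the version of the first one found
which -a protoc
protoc --version
```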
Update: it looks like I had made some local changes to CMakeLists.txt which prevented the fix provided above from working. So, kudos to @dionysius, problem solved. My respect.
LocalAI version: latest master
Environment, CPU architecture, OS, and Version: Linux DGXA-Node26 5.15.0-1042-nvidia #42-Ubuntu SMP Wed Nov 15 20:28:30 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Describe the bug: I am trying to build LocalAI locally following the build guide. Regardless of how I set BUILD_GRPC_FOR_BACKEND_LLAMA, I get a bunch of errors related to gRPC:
So, I am requesting help from somebody experienced in setting up a build environment on Ubuntu, so we can identify and get rid of the root cause.