Closed: dionysius closed this issue 6 months ago
@dionysius Use vcpkg to install all missing deps, then add `vcpkg.cmake` to llama.cpp's CMake list before the project setup.
It worked for me. See Microsoft's detailed steps on using vcpkg.
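For reference, the vcpkg route described above might look roughly like this (the package names and the toolchain path are assumptions based on a standard vcpkg install; adjust them to your setup):

```shell
# Hypothetical sketch of the vcpkg approach: install the missing deps,
# then point CMake at vcpkg's toolchain file when configuring llama.cpp.
vcpkg install protobuf grpc abseil

# Passing the toolchain file makes find_package() resolve against
# vcpkg's packages instead of the (incomplete) system ones:
cmake -B build \
  -DCMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake"
```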
Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!
but... I can also be funny or helpful :smile_cat: and I can generally provide good tips, or point to places in the documentation or the code to look at, based on what you wrote in the issue.
Don't engage in conversation with me, I don't support (yet) replying!
ERROR: The prompt size exceeds the context window size and cannot be processed.
I am running into the same issue on Ubuntu 22.04 in WSL2. Same symptoms. I am able to compile llama.cpp, and I also needed to install `libabsl-dev` to get this far.
I am not familiar with vcpkg, so any additional help would be really appreciated.
AFAIK this is because not every distribution (Fedora, Debian, and derivatives among them) builds Protobuf with CMake, which leads to the `.cmake` files missing on those distros, so CMake cannot find them when building llama.cpp.
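A quick way to check whether your distro's Protobuf package ships those CMake config files at all (the paths below are common install locations, not an exhaustive list):

```shell
# Probe common install locations for Protobuf's CMake package config.
# If nothing is found, find_package(Protobuf CONFIG) has nothing to load,
# which matches the symptom described above.
found=no
for d in /usr/lib/cmake/protobuf /usr/lib/*/cmake/protobuf \
         /usr/local/lib/cmake/protobuf; do
  if [ -f "$d/protobuf-config.cmake" ]; then
    found=yes
    echo "Found: $d/protobuf-config.cmake"
  fi
done
if [ "$found" = no ]; then
  echo "No protobuf-config.cmake found (distro likely built Protobuf without CMake)"
fi
```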
At the expense of extra compile time, you can add `BUILD_GRPC_FOR_BACKEND_LLAMA=ON` to your `make` command to clone gRPC and Protobuf as submodules and compile them alongside llama.cpp. This should fix the lookup problems, as CMake no longer has to look them up at all.
See #1232.
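Concretely, assuming you build LocalAI with the usual `make` invocation from the source tree, that looks like:

```shell
# From the root of the LocalAI checkout: build gRPC and Protobuf from the
# bundled submodules instead of searching for distro-provided CMake configs.
make BUILD_GRPC_FOR_BACKEND_LLAMA=ON build
```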
Thank you, this `BUILD_GRPC_FOR_BACKEND_LLAMA=ON` did the trick! I'm confused how I did not see that, as my linked issue discussed that part. Well then, this issue is just a duplicate of #1196.
> `BUILD_GRPC_FOR_BACKEND_LLAMA=ON`

For what it's worth, this did not resolve my issue. I receive another error message when trying to build with `BUILD_GRPC_FOR_BACKEND_LLAMA=ON`. But I can open a separate issue for that error.
LocalAI version: commit 67966b623cd92602406057ce4214577e0a00197d
Environment, CPU architecture, OS, and Version:
Describe the bug Even after installing various `libproto*-dev` packages, CMake can't find Protobuf. Unfortunately I'm not familiar enough with C++ to help myself further.

To Reproduce
Expected behavior Builds with the available Protobuf library packages
Logs
Additional context Related to https://github.com/mudler/LocalAI/issues/1196, but I am only stuck on Protobuf; installing `libabsl-dev` and other required libraries fixed the previous errors.
I can build https://github.com/ggerganov/llama.cpp itself without error.
I can't find which `libproto*` package should be installed; searching only yields the CMakeLists.txt where it's used.
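On Debian/Ubuntu, one way to look for the package that would provide the missing file is the following sketch (requires `apt-file` to be installed; not verified against every release):

```shell
# Search the package index for the CMake config file
# (run 'sudo apt-file update' once beforehand):
apt-file search protobuf-config.cmake

# Or check which *installed* package, if any, owns it:
dpkg -S protobuf-config.cmake
```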
I have not set any GPU toolset in any env var. I expect that running on CPU should be fine for helping fix Go-related bugs and features, and/or I can rebuild with GPU support later.
Installed packages:
what cmake FindProtobuf tells me: