mudler / LocalAI

:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many other model architectures. Generates text, audio, video, and images, with voice-cloning capabilities.
https://localai.io
MIT License

can't build backend/cpp/llama, cmake can't find protobuf #1386

Closed: dionysius closed 6 months ago

dionysius commented 7 months ago

LocalAI version: commit 67966b623cd92602406057ce4214577e0a00197d

Environment, CPU architecture, OS, and Version:

AMD Ryzen 7 7840HS with Radeon 780M Graphics
Linux cappuccino 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux
Devuan GNU/Linux 5 (daedalus) #Based on Debian Bookworm (version 12) with Linux kernel 6.1

Describe the bug

Even after installing various libproto*-dev packages, cmake can't find Protobuf. Unfortunately, I'm not familiar enough with C++ to get any further on my own.

To Reproduce

make clean
make build #until it fails
cd backend/cpp/llama
make rebuild

Expected behavior

Builds with the available Protobuf library packages.

Logs

$ make rebuild
cp -rfv /home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/CMakeLists.txt llama.cpp/examples/grpc-server/
'/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/CMakeLists.txt' -> 'llama.cpp/examples/grpc-server/CMakeLists.txt'
cp -rfv /home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/grpc-server.cpp llama.cpp/examples/grpc-server/
'/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/grpc-server.cpp' -> 'llama.cpp/examples/grpc-server/grpc-server.cpp'
cp -rfv /home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/json.hpp llama.cpp/examples/grpc-server/
'/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/json.hpp' -> 'llama.cpp/examples/grpc-server/json.hpp'
rm -rf grpc-server
make grpc-server
make[1]: Entering directory '/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama'
cd llama.cpp && mkdir -p build && cd build && cmake ..  && cmake --build . --config Release
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Error at examples/grpc-server/CMakeLists.txt:26 (find_package):
  Could not find a package configuration file provided by "Protobuf" with any
  of the following names:

    ProtobufConfig.cmake
    protobuf-config.cmake

  Add the installation prefix of "Protobuf" to CMAKE_PREFIX_PATH or set
  "Protobuf_DIR" to a directory containing one of the above files.  If
  "Protobuf" provides a separate development package or SDK, be sure it has
  been installed.

-- Configuring incomplete, errors occurred!
See also "/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama/llama.cpp/build/CMakeFiles/CMakeOutput.log".
make[1]: *** [Makefile:49: grpc-server] Error 1
make[1]: Leaving directory '/home/dionysius/Projects/github.com/mudler/LocalAI/backend/cpp/llama'
make: *** [Makefile:42: rebuild] Error 2
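The error's suggestion to set CMAKE_PREFIX_PATH only helps if some Protobuf installation actually ships those config files; a hypothetical invocation (the /opt/protobuf prefix is a placeholder, not a path that exists on this system) would look like:

$ cd llama.cpp/build
$ cmake .. -DCMAKE_PREFIX_PATH=/opt/protobuf

As far as I can tell, though, no installed package here provides ProtobufConfig.cmake, so there is nothing to point it at.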

Additional context

Related to https://github.com/mudler/LocalAI/issues/1196, but I am only stuck on protobuf; installing libabsl-dev and the other required libraries fixed the previous errors.

I can build https://github.com/ggerganov/llama.cpp itself without error.

I can't find which libproto package should be installed; searching only yields the CMakeLists.txt where Protobuf is used.

I have not set any GPU toolset in any environment variable. I expect that running on CPU should be fine for helping to fix Go-related bugs and features, and/or that I can rebuild with GPU support later.

Installed packages:

$ dpkg -l | grep libproto
ii  libprotobuf-c-dev:amd64                                     1.4.1-1+b1                          amd64        Protocol Buffers C static library and headers (protobuf-c)
ii  libprotobuf-c1:amd64                                        1.4.1-1+b1                          amd64        Protocol Buffers C shared library (protobuf-c)
ii  libprotobuf-dev:amd64                                       3.21.12-3                           amd64        protocol buffers C++ library (development files) and proto files
ii  libprotobuf-lite32:amd64                                    3.21.12-3                           amd64        protocol buffers C++ library (lite version)
ii  libprotobuf32:amd64                                         3.21.12-3                           amd64        protocol buffers C++ library
ii  libprotoc-dev:amd64                                         3.21.12-3                           amd64        protocol buffers compiler library (development files)
ii  libprotoc32:amd64                                           3.21.12-3                           amd64        protocol buffers compiler library
ii  libprotozero-dev                                            1.7.1-1                             amd64        Minimalistic protocol buffer decoder and encoder in C++
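A quick way to check whether any of these packages ship the CMake package config files the error is asking for (a sketch; file locations may vary by distro):

$ dpkg -L libprotobuf-dev | grep -i cmake
$ find /usr -name 'ProtobufConfig.cmake' -o -name 'protobuf-config.cmake'

If both come back empty, the distro package was not built with CMake, which matches the error above.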

What cmake's FindProtobuf module tells me:

$ cmake --help-module FindProtobuf
FindProtobuf
------------

Locate and configure the Google Protocol Buffers library.

.. versionadded:: 3.6
  Support for ``find_package()`` version checks.

.. versionchanged:: 3.6
  All input and output variables use the ``Protobuf_`` prefix.
  Variables with ``PROTOBUF_`` prefix are still supported for compatibility.

The following variables can be set and are optional:

``Protobuf_SRC_ROOT_FOLDER``
  When compiling with MSVC, if this cache variable is set
  the protobuf-default VS project build locations
  (vsprojects/Debug and vsprojects/Release
  or vsprojects/x64/Debug and vsprojects/x64/Release)
  will be searched for libraries and binaries.
``Protobuf_IMPORT_DIRS``
  List of additional directories to be searched for
  imported .proto files.
``Protobuf_DEBUG``
  .. versionadded:: 3.6

  Show debug messages.
``Protobuf_USE_STATIC_LIBS``
  .. versionadded:: 3.9

  Set to ON to force the use of the static libraries.
  Default is OFF.

Defines the following variables:

``Protobuf_FOUND``
  Found the Google Protocol Buffers library
  (libprotobuf & header files)
``Protobuf_VERSION``
  .. versionadded:: 3.6

  Version of package found.
``Protobuf_INCLUDE_DIRS``
  Include directories for Google Protocol Buffers
``Protobuf_LIBRARIES``
  The protobuf libraries
``Protobuf_PROTOC_LIBRARIES``
  The protoc libraries
``Protobuf_LITE_LIBRARIES``
  The protobuf-lite libraries

.. versionadded:: 3.9
  The following ``IMPORTED`` targets are also defined:

``protobuf::libprotobuf``
  The protobuf library.
``protobuf::libprotobuf-lite``
  The protobuf lite library.
``protobuf::libprotoc``
  The protoc library.
``protobuf::protoc``
  .. versionadded:: 3.10
    The protoc compiler.

The following cache variables are also available to set or use:

``Protobuf_LIBRARY``
  The protobuf library
``Protobuf_PROTOC_LIBRARY``
  The protoc library
``Protobuf_INCLUDE_DIR``
  The include directory for protocol buffers
``Protobuf_PROTOC_EXECUTABLE``
  The protoc compiler
``Protobuf_LIBRARY_DEBUG``
  The protobuf library (debug)
``Protobuf_PROTOC_LIBRARY_DEBUG``
  The protoc library (debug)
``Protobuf_LITE_LIBRARY``
  The protobuf lite library
``Protobuf_LITE_LIBRARY_DEBUG``
  The protobuf lite library (debug)

Example:

 find_package(Protobuf REQUIRED)
 include_directories(${Protobuf_INCLUDE_DIRS})
 include_directories(${CMAKE_CURRENT_BINARY_DIR})
 protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS foo.proto)
 protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS EXPORT_MACRO DLL_EXPORT foo.proto)
 protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS DESCRIPTORS PROTO_DESCS foo.proto)
 protobuf_generate_python(PROTO_PY foo.proto)
 add_executable(bar bar.cc ${PROTO_SRCS} ${PROTO_HDRS})
 target_link_libraries(bar ${Protobuf_LIBRARIES})

.. note::
  The ``protobuf_generate_cpp`` and ``protobuf_generate_python``
  functions and ``add_executable()`` or ``add_library()``
  calls only work properly within the same directory.

.. command:: protobuf_generate_cpp

  Add custom commands to process ``.proto`` files to C++::

   protobuf_generate_cpp (<SRCS> <HDRS>
       [DESCRIPTORS <DESC>] [EXPORT_MACRO <MACRO>] [<ARGN>...])

 ``SRCS``
   Variable to define with autogenerated source files
 ``HDRS``
   Variable to define with autogenerated header files
 ``DESCRIPTORS``
   .. versionadded:: 3.10
     Variable to define with autogenerated descriptor files, if requested.
 ``EXPORT_MACRO``
   is a macro which should expand to ``__declspec(dllexport)`` or
   ``__declspec(dllimport)`` depending on what is being compiled.
 ``ARGN``
   ``.proto`` files

.. command:: protobuf_generate_python

  .. versionadded:: 3.4

  Add custom commands to process ``.proto`` files to Python::

   protobuf_generate_python (<PY> [<ARGN>...])

 ``PY``
   Variable to define with autogenerated Python files
 ``ARGN``
   ``.proto`` files
yhyu13 commented 7 months ago

@dionysius Use vcpkg to install all the missing deps, and then add vcpkg.cmake as the toolchain file in llama.cpp's CMakeLists.txt before the project() call.

It worked for me. See Microsoft's detailed steps on using vcpkg.
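Roughly, the vcpkg route looks like this (a sketch; the vcpkg checkout location and the toolchain path are placeholders, adjust them to where you clone it):

$ git clone https://github.com/microsoft/vcpkg
$ ./vcpkg/bootstrap-vcpkg.sh
$ ./vcpkg/vcpkg install protobuf grpc
$ cd llama.cpp/build
$ cmake .. -DCMAKE_TOOLCHAIN_FILE=/path/to/vcpkg/scripts/buildsystems/vcpkg.cmake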

localai-bot commented 7 months ago

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler), so please beware that I might hallucinate sometimes!

but.... I can also be funny or helpful :smile_cat: and I can provide, generally speaking, good tips or places to look in the documentation or in the code based on what you wrote in the issue.

Don't engage in conversation with me; I don't support replying (yet)!

:warning::warning::warning::warning::warning:

ERROR: The prompt size exceeds the context window size and cannot be processed.

Sources:

KevinGage commented 7 months ago

I am running into the same issue on Ubuntu 22.04 in WSL2, with the same symptoms. I am able to compile llama.cpp, and I also needed to install libabsl-dev to get this far. I am not familiar with vcpkg, so any additional help would be much appreciated.

B4ckslash commented 6 months ago

AFAIK this is because not every distribution (Fedora, Debian, and their derivatives among them) builds Protobuf with CMake, so the package config files (ProtobufConfig.cmake / protobuf-config.cmake) are missing on those distros and CMake cannot find Protobuf when building llama.cpp.

At the expense of extra compile time, you can add BUILD_GRPC_FOR_BACKEND_LLAMA=ON to your make command to clone gRPC and Protobuf as submodules and compile them alongside llama.cpp; see the sketch below. This should fix the lookup problem, since CMake no longer has to find the system packages at all.
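For example, from the LocalAI source root (assuming the top-level Makefile picks the variable up from the command line, as described above):

$ make clean
$ make BUILD_GRPC_FOR_BACKEND_LLAMA=ON build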

See #1232.

dionysius commented 6 months ago

Thank you, BUILD_GRPC_FOR_BACKEND_LLAMA=ON did the trick! I'm confused how I missed that, since my linked issue discussed exactly that part. In that case this issue is just a duplicate of #1196.

KevinGage commented 6 months ago

BUILD_GRPC_FOR_BACKEND_LLAMA=ON

For what it's worth, this did not resolve my issue; I receive a different error message when trying to build with BUILD_GRPC_FOR_BACKEND_LLAMA=ON. I can open a separate issue for that error.