Closed: yeahdongcn closed this PR 4 months ago.
@thxCode Could you please run the workflow to check if the build succeeds? Thanks!
The CMake version has been updated in the latest commit to align with our base image, Ubuntu 20.04.
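(For context: Ubuntu 20.04 ships CMake 3.16, so the pinned minimum presumably looks like the line below; the exact version in the commit is an assumption.)

```cmake
# Assumed floor matching Ubuntu 20.04's stock CMake (3.16.x).
cmake_minimum_required(VERSION 3.16)
```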
The project hierarchy differs slightly from Ollama's. Ollama injects its ext_server as a sub-target of llama.cpp, allowing ext_server to inherit all of llama.cpp's compilation settings.
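As a minimal sketch of that in-tree pattern (paths and target names here are hypothetical, not Ollama's actual build files), directory-scoped settings in llama.cpp's top-level CMakeLists.txt are inherited by any subdirectory added beneath it:

```cmake
# Sketch of the in-tree pattern described above.
# File layout and names are hypothetical, not Ollama's actual build files.

# llama.cpp/CMakeLists.txt (simplified)
cmake_minimum_required(VERSION 3.16)
project(llama_cpp)
add_compile_definitions(GGML_USE_CUDA)  # directory-scoped: visible below
add_library(llama llama.cpp)
add_subdirectory(ext_server)            # inject the sub-target in-tree

# llama.cpp/ext_server/CMakeLists.txt (simplified)
add_executable(ext_server server.cpp)   # inherits GGML_USE_CUDA automatically
target_link_libraries(ext_server PRIVATE llama)
```

An out-of-tree target gets none of this automatically, which is why llama-box has to set the compiler and compile definitions itself.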
Yes, llama-box does not use the in-tree structure, so we can observe the compilation details. I think we have almost achieved the goal. Can you add this and give it a try?
```diff
diff --git a/llama-box/CMakeLists.txt b/llama-box/CMakeLists.txt
index 80dcd90..172e673 100644
--- a/llama-box/CMakeLists.txt
+++ b/llama-box/CMakeLists.txt
@@ -77,6 +77,16 @@ if (GGML_CANN)
                 ${CANN_INSTALL_DIR}/runtime/lib64/stub)
     endif ()
 endif ()
+if (GGML_MUSA)
+    set(CMAKE_C_COMPILER clang)
+    set(CMAKE_C_EXTENSIONS OFF)
+    set(CMAKE_CXX_COMPILER clang++)
+    set(CMAKE_CXX_EXTENSIONS OFF)
+
+    set(GGML_CUDA ON)
+
+    list(APPEND GGML_CDEF_PUBLIC GGML_USE_MUSA)
+endif()
 add_executable(${TARGET} main.cpp param.hpp ratelimiter.hpp utils.hpp)
 add_dependencies(${TARGET} patch)
 target_link_libraries(${TARGET} PRIVATE version common llava ${CMAKE_THREAD_LIBS_INIT})
```
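For reference, with this patch the MUSA path would presumably be selected at configure time with something like `cmake -B build -DGGML_MUSA=ON` followed by `cmake --build build` (the exact invocation used by llama-box's CI is an assumption here). Setting `GGML_CUDA` to `ON` also fits how upstream ggml handles MUSA, as I understand it: the MUSA backend reuses the CUDA backend sources, with the `GGML_USE_MUSA` define redirecting CUDA calls to their MUSA equivalents.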
Based on your work, I tried it on my laptop. Is it green?
Yes, I also tried switching the default C/CXX compiler to Clang locally, and the compilation finished without errors. However, it seems odd that GGML_MUSA is used directly. Is this acceptable to you? 😂
LGTM at present.
Cool! Please see the latest commit, thanks.
The MUSA build is now successful.
This PR enables building for MUSA (S4000).