ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Bug: `-fPIC` compiler flag missing in cmake build? #8028

Closed uwu-420 closed 1 day ago

uwu-420 commented 2 months ago

What happened?

In the Makefile the compiler flag `-fPIC` is used, but in CMakeLists.txt it is not. Would it make sense to add that flag for CMake builds as well? I didn't notice this on macOS, but when compiling library code on Linux I get errors like this:

```
/usr/bin/ld: common/libcommon.a(common.cpp.o): relocation R_X86_64_PC32 against symbol `stdout@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC
```
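(Editor's note: as a sketch of what such a change could look like, CMake has a built-in way to request position-independent code without hard-coding `-fPIC`; whether the llama.cpp maintainers would want it enabled unconditionally is an open question, so treat this as an illustration, not the project's actual fix.)

```cmake
# Sketch: enable PIC for all targets in this project, so that static
# libraries such as libcommon.a / libggml.a can later be linked into
# shared objects. CMake translates this to -fPIC (or the platform
# equivalent) on compilers that need it.
set(CMAKE_POSITION_INDEPENDENT_CODE ON)

# Alternatively, set it per target instead of globally:
# set_target_properties(common PROPERTIES POSITION_INDEPENDENT_CODE ON)
```

The same effect can be had without patching CMakeLists.txt by passing `-DCMAKE_POSITION_INDEPENDENT_CODE=ON` on the `cmake` configure command line.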

Name and Version

version: 3169 (781eb37f) built with Apple clang version 15.0.0 (clang-1500.0.40.1) for arm64-apple-darwin23.5.0

What operating system are you seeing the problem on?

No response

Relevant log output

No response

sztejkat commented 1 month ago

Can confirm. Commit hash: 68504f0970db5a3602d176953690f503059906b1 OS: kubuntu

Compilation command line:

```
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
cmake -S . -B build -DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1030 -DCMAKE_BUILD_TYPE=Release \
&& cmake --build build --config Release -- -j 12
```

Output:

```
CMake Warning at CMakeLists.txt:95 (message):
  LLAMA_NATIVE is deprecated and will be removed in the future.

  Use GGML_NATIVE instead

Call Stack (most recent call first):
  CMakeLists.txt:105 (llama_option_depr)

-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- The HIP compiler identification is Clang 17.0.0
-- Detecting HIP compiler ABI info
-- Detecting HIP compiler ABI info - done
-- Check for working HIP compiler: /opt/rocm-6.0.2/llvm/bin/clang - skipped
-- Detecting HIP compile features
-- Detecting HIP compile features - done
-- HIP and hipBLAS found
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (1.2s)
-- Generating done (0.1s)
-- Build files have been written to: /home/sztejkat/ai/llama2/soft/llama.cpp/build
[  0%] Generating build details from Git
[  1%] Building C object examples/gguf-hash/CMakeFiles/xxhash.dir/deps/xxhash/xxhash.c.o
[  2%] Building C object examples/gguf-hash/CMakeFiles/sha256.dir/deps/sha256/sha256.c.o
[  2%] Building C object examples/gguf-hash/CMakeFiles/sha1.dir/deps/sha1/sha1.c.o
-- Found Git: /usr/bin/git (found version "2.40.1")
(...........)
[ 43%] Built target llama-bench-matmult
[ 43%] Linking CXX executable ../../bin/llama-quantize-stats
/usr/bin/ld: ../../ggml/src/libggml.a(ggml-cuda.cu.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIE
/usr/bin/ld: failed to set dynamic section sizes: bad value
collect2: error: ld returned 1 exit status
gmake[2]: *** [examples/quantize-stats/CMakeFiles/llama-quantize-stats.dir/build.make:108: bin/llama-quantize-stats] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:3174: examples/quantize-stats/CMakeFiles/llama-quantize-stats.dir/all] Error 2
gmake[1]: *** Waiting for unfinished jobs....
```

github-actions[bot] commented 1 day ago

This issue was closed because it has been inactive for 14 days since being marked as stale.