Closed: CHNtentes closed this issue 2 weeks ago
Yeah, I think it happens on PC.
We added that as part of "Common Issues and Mitigations" section in https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md#common-issues-and-mitigations
Are there any further actionable things to do with this github issue?
Thanks for your reply. I'll close this issue.
@mergennachin I built the llama runner on PC successfully, but when I build it for Android, it gives me another, similar error:
(executorch) v2x@v2x-OMEN-Desktop:/hdd_2/ltg/executorch$ cmake --build cmake-out-android/examples/models/llama2 -j16 --config Release
[ 16%] Building CXX object runner/CMakeFiles/llama_runner.dir/runner.cpp.o
[ 16%] Building CXX object runner/CMakeFiles/llama_runner.dir/__/tokenizer/bpe_tokenizer.cpp.o
[ 25%] Building CXX object runner/CMakeFiles/llama_runner.dir/hdd_2/ltg/executorch/extension/evalue_util/print_evalue.cpp.o
[ 33%] Building CXX object runner/CMakeFiles/llama_runner.dir/hdd_2/ltg/executorch/kernels/optimized/blas/CPUBlas.cpp.o
[ 41%] Building CXX object runner/CMakeFiles/llama_runner.dir/__/sampler/sampler.cpp.o
[ 58%] Building CXX object custom_ops/CMakeFiles/custom_ops.dir/hdd_2/ltg/executorch/extension/parallel/thread_parallel.cpp.o
[ 58%] Building CXX object custom_ops/CMakeFiles/custom_ops.dir/op_sdpa.cpp.o
[ 66%] Linking CXX static library libllama_runner.a
[ 66%] Built target llama_runner
[ 75%] Linking CXX static library libcustom_ops.a
[ 75%] Built target custom_ops
[ 83%] Building CXX object CMakeFiles/llama_main.dir/main.cpp.o
[ 91%] Building CXX object CMakeFiles/llama_main.dir/hdd_2/ltg/executorch/backends/xnnpack/threadpool/cpuinfo_utils.cpp.o
[100%] Linking CXX executable llama_main
ld.lld: error: unable to find library -lpthread
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [CMakeFiles/llama_main.dir/build.make:133: llama_main] Error 1
make[1]: *** [CMakeFiles/Makefile2:119: CMakeFiles/llama_main.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
I already added pthread to target_link_libraries in examples/models/llama2/CMakeLists.txt, and I don't know what to do now...
@kirklandsign
I actually found a temporary solution, just remove "-lpthread" from cmake-out-android/examples/models/llama2/CMakeFiles/llama_main.dir/link.txt and the build succeeded. It seems that the project does not require -lpthread after all, not sure why it was added though.
Hi @CHNtentes for this part, do you mean for Android or PC?
It is for Android. You can find it in filename.
OK, for Android I didn't use -lpthread and it worked for me. Android has its own libc (bionic), so the behavior might be different.
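The point above can be encoded in the build configuration itself. A minimal sketch (not the actual executorch CMakeLists.txt; the llama_main target name is taken from the build log above) that skips the pthread dependency on Android, where bionic's libc already provides the pthread API:

```cmake
# Sketch only: guard the threading dependency so Android builds never
# pass -lpthread, since bionic has no separate libpthread.
if(NOT ANDROID)
  # Threads::Threads resolves to -lpthread (or the platform equivalent)
  # only where a separate threading library actually exists.
  find_package(Threads REQUIRED)
  target_link_libraries(llama_main PRIVATE Threads::Threads)
endif()
```

With a guard like this, the same CMakeLists.txt works for both the desktop build (which needs pthread) and the NDK build (which must not link it).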
All resolved?
Hi, when I follow the instructions here to build the llama runner on PC, it throws an error:
I asked ChatGPT, and it told me to add 'pthread' to target_link_libraries in examples/models/llama2/CMakeLists.txt, which solved the issue. I just wonder if this issue only happens on my PC?
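For reference, the change described above can be sketched as follows (illustrative only, using CMake's portable Threads package rather than the bare 'pthread' name; the llama_main target is an assumption based on the build output in this thread):

```cmake
# Illustrative sketch of the desktop fix: on Linux this is equivalent
# to appending 'pthread' to target_link_libraries, but the Threads
# abstraction lets CMake pick the right flag (or none) per platform.
find_package(Threads REQUIRED)
target_link_libraries(llama_main PRIVATE Threads::Threads)
```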