After I set the LD_LIBRARY_PATH variable as the instructions described, when I try to execute llama-cli from /build/bin, I encounter the following error:
CANNOT LINK EXECUTABLE "./llama-cli": cannot locate symbol "__emutls_get_address" referenced by "/data/data/com.termux/files/home/llama.cpp/build/ggml/src/libggml.so"...
After unsetting the variable, I can execute the program again, but I'm not sure whether it is actually using the GPU to accelerate inference.
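To be concrete, these are roughly the steps involved (a sketch; the llama-cli invocation is commented out because the binary and model path are specific to my Termux build):

```shell
# Point the dynamic linker at the vendor libraries first,
# as the build instructions suggested for GPU support.
export LD_LIBRARY_PATH=/vendor/lib64
echo "$LD_LIBRARY_PATH"

# ./llama-cli ...   # this is where the __emutls_get_address error appears

# Unsetting the variable restores the default library search paths,
# and the program runs again.
unset LD_LIBRARY_PATH
echo "${LD_LIBRARY_PATH:-unset}"
```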
What is actually going on when LD_LIBRARY_PATH is set to /vendor/lib64? And is there any alternative way to use the GPU other than setting this variable?