cplasfwst opened 3 days ago
My L4T version is 36.3.0.
@cplasfwst The container llama_cpp:gguf-{L4T_VERSION}
with L4T_VERSION=36.3.0 is not available from that developer. Try L4T_VERSION=36.2.0
instead. See dusty-nv llama_cpp for more details.
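For anyone following along, here is a minimal sketch of falling back to the r36.2.0 tag. The exact tag string is an assumption based on this thread and may have changed since:

```shell
# Assemble the image tag from the fallback L4T release (assumed naming scheme).
L4T_TAG=r36.2.0
IMAGE="dustynv/llama_cpp:gguf-${L4T_TAG}"
echo "docker pull ${IMAGE}"   # run the printed command on the Jetson
```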
Do you know what causes this?
@cplasfwst
Yes, it would seem like it's having an issue with libcudart.so. First, on your Jetson run sudo find /usr/local/cuda -name '*libcudart*' 2>/dev/null
Also, for posterity, run docker run -itu0 --rm dustynv/llama_cpp:gguf-r36.2.0 find /usr/local/cuda -name '*libcudart*' 2>/dev/null
and paste the output from both, please.
Why does it run like this?
Dear author, my device environment is like this. What should I do to run Ollama normally? Please help me. Thank you very much!
@cplasfwst Yes, it would seem like it's having an issue with libcudart.so. First, on your Jetson run
sudo find /usr/local/cuda -name '*libcudart*' 2>/dev/null
Also for posterity you should run
docker run -itu0 --rm dustynv/llama_cpp:gguf-r36.2.0 find /usr/local/cuda -name '*libcudart*' 2>/dev/null
and paste the output from those please.
When you are building Ollama, the sub-task to build an external library called llama_cpp is unable to properly import libcudart.so from your installed CUDA libraries. Please run the two commands I referenced in the quoted text so we can see if the libcudart.so file is present or not.
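As a quick sanity check before rebuilding, the two places the CUDA runtime normally shows up can be probed like this. This is a sketch assuming a default JetPack 6 layout; the lib64 path is an assumption:

```shell
# Sketch: two quick checks for the CUDA runtime library on a Jetson
# (paths assume a default JetPack 6 install; adjust if CUDA lives elsewhere).
ldconfig -p | grep libcudart || echo "libcudart not in the linker cache"
ls -l /usr/local/cuda/lib64/libcudart* 2>/dev/null || echo "no libcudart under lib64"
```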
sudo find /usr/local/cuda -name 'libcudart' 2>/dev/null
I ran the command you mentioned and there was no output. In addition, I am not very familiar with more in-depth problems. I am a beginner, and I really want to use my Jetson AGX Orin to run Ollama. Do you have time to find the problem for me remotely? I would be grateful, and I can pay a fee.
I am not at my computer; I am typing to you from my cell phone, sorry.
Please try this command, it is similar but checks a different directory: sudo find /usr -name '*cudart*' 2>/dev/null
and docker run -itu0 --rm dustynv/llama_cpp:gguf-r36.2.0 find /usr -name '*cudart*' 2>/dev/null
Thank you very much for your patience in answering my questions! I couldn't find libcudart using the command, but I found the directory of this .so file manually. Please see the picture below. After confirming that this .so file exists, what should I do?
@cplasfwst Okay, if there is no output then that might be a problem. libcudart.so is a standard library released with the CUDA Toolkit and is required by almost all AI programs on Jetson; the CUDA Runtime (cudart) is its user-friendly API. It should be installed by default with JetPack 6.
This command will take longer to run, but it will scan your entire file system for the library in case I searched the wrong place previously.
sudo find / -name '*cudart*' 2>/dev/null
if that returns nothing, try this (NEW EDIT: grep for CUDA instead of cudart in the apt list)
sudo apt list --installed 2>/dev/null | grep -i cuda
if that still finds nothing, please check that you installed JetPack correctly on your AGX Orin.
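One way to verify the JetPack install itself is to check the meta-package and the L4T release string. This is a sketch; the package name nvidia-jetpack is the usual JetPack meta-package name and is an assumption here:

```shell
# Sketch: confirm JetPack / L4T are actually installed on the Orin.
dpkg -l | grep -i nvidia-jetpack || echo "nvidia-jetpack not found"  # JetPack meta-package and version
cat /etc/nv_tegra_release 2>/dev/null                                # L4T release string, e.g. "# R36 ..."
```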
I checked the documentation and NVIDIA forums to make sure that the r36.2 containers work with r36.3, and the main container engineer at NVIDIA confirmed that they do:
So the problem is that the libcudart file is missing or has been moved; I hope the find commands will discover it.
I have encountered many problems now, which prevent me from using ollama. I don't know how to deal with it. I have been researching for a day. I really hope you can help me.
I've updated the Dockerfile; it looks like dusty-nv updated the llama_cpp container tag to just be r36.2.0.
Can you also run ls -latr /usr/local/cuda/lib/libcud* ?
Running ls -latr /usr/local/cuda/lib/libcud* gives this result:
In addition, I ran the build again on the Jetson AGX Orin and got this prompt:
Thanks again for your help. I hope to run Ollama on the Jetson AGX Orin; this problem has troubled me for many days.
But I tested ls -latr /usr/local/cuda/lib64/libcud* and it does produce output.
Thank you for running the search. I was able to find some suggestions for fixing the problem. It seems like the compiler can't find libcuda.so to link the CUDA functions after compiling.
Please run docker run -itu0 --rm dustynv/llama_cpp:r36.2.0 find /usr/lib -name '*libcuda.so*' 2>/dev/null
I have added the path /usr/lib/aarch64-linux-gnu to the library search path (LD_LIBRARY_PATH) in the Dockerfile. Try that.
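For reference, a sketch of the equivalent fix applied manually on the host, in case rebuilding from the updated Dockerfile is not an option. The directory list assumes default JetPack paths on aarch64:

```shell
# Sketch: prepend the Jetson CUDA library directories (default JetPack
# paths assumed) to the linker search path before rebuilding.
export LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu:/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
echo "${LD_LIBRARY_PATH}"
```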
Hello author, I am glad that you have made contributions for Jetson users. I have a question I would like to ask.
Some errors occurred when I used the command to build the Docker image. What could be causing them?