smithlai opened this issue 7 months ago
Thanks for pointing this out @smithlai, it should be fixed in https://github.com/dusty-nv/jetson-containers/commit/a637fc9eb8e169454890a0874aca617a81edbadc
OK, these are rebuilding now across all JP5/JP6 container versions and are being pushed to dockerhub 👍
Thank you dusty,
And there is one more issue.
I tried to rebuild my container with `./build -n my_container langchain`
or `./build -n my_container llama_cpp`,
but the build always fails at this step:
```
Step 17/17 : RUN pip3 show llama-cpp-python | grep llama && python3 -c 'import llama_cpp' && python3 -m llama_cpp.server --help
 ---> Running in 9e276c3c1db9
Name: llama_cpp_python
Summary: Python bindings for the llama.cpp library
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 74, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)
  File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 87, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 76, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.8/dist-packages/llama_cpp/libllama.so': libcuda.so.1: cannot open shared object file: No such file or directory
```
The platform is a Jetson AGX Orin.
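The failure boils down to `dlopen()` being unable to find `libcuda.so.1` inside the build container. The check can be reproduced with a small sketch (the helper function here is just for illustration):

```python
import ctypes

def can_dlopen(libname: str) -> bool:
    """Return True if the shared library can be loaded via dlopen()."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# In a container built without the nvidia runtime, libcuda.so.1 is not
# mounted, so this prints False -- the same condition that makes
# `import llama_cpp` raise the RuntimeError in the log above.
print(can_dlopen("libcuda.so.1"))
```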
Hi @smithlai, did you set your default docker runtime to nvidia? This enables CUDA to be used during the builds.
https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime
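For reference, the setting described in that guide amounts to making `nvidia` the default runtime in `/etc/docker/daemon.json` (a sketch based on the linked setup doc; restart the Docker daemon after editing):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

With `"default-runtime": "nvidia"`, the CUDA libraries are mounted into containers during `docker build`, not just at `docker run` time.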
My apologies!
I didn't notice that link. Everything works better now.
I tried to run the langchain:samples container with
`./run.sh $(./autotag langchain:samples)`
According to the description in the README, langchain:samples is built on top of pytorch, llama_cpp, and langchain,
so those packages should all be present in the container.
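A quick way to verify that from inside the container (the package names are taken from the README expectation; this is just a diagnostic sketch):

```python
import importlib.util

# Report whether each expected package is importable in this image.
# find_spec() returns None for a missing top-level package.
for name in ("torch", "langchain", "llama_cpp"):
    status = "OK" if importlib.util.find_spec(name) else "MISSING"
    print(f"{name}: {status}")
```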
However, importing those packages fails in langchain:samples, while the same imports work in langchain:r35.4.1.
Since langchain:samples is based on langchain:r35.4.1, it's weird that langchain:samples doesn't have the pytorch and langchain packages, not to mention llama_cpp. I cannot even successfully execute the default demo notebook LangChain_Local-LLMs.ipynb.