dusty-nv / jetson-containers

Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
MIT License
2.09k stars 435 forks

Dependencies not working in langchain:samples #367

Open smithlai opened 7 months ago

smithlai commented 7 months ago

I tried to run container langchain:samples with ./run.sh $(./autotag langchain:samples)

As described in the README, langchain:samples is based on:

Dependencies build-essential cuda cudnn python tensorrt numpy cmake onnx pytorch huggingface_hub llama_cpp langchain rust jupyterlab

So pytorch, llama_cpp, and langchain should all be present in the container.

However...

In langchain:samples:

python3 -c 'import torch'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'

python3 -c 'import langchain'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'langchain'
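A quick way to audit which of the expected packages actually made it into an image is a small import probe. This is a generic sketch, not part of the repo; the module names passed in are whatever you expect the image to contain (e.g. torch, langchain, llama_cpp):

```python
import importlib
import importlib.util

def probe(modules):
    """Map each module name to its installed version, or None if it is missing."""
    results = {}
    for name in modules:
        if importlib.util.find_spec(name) is None:
            results[name] = None  # not installed in this environment
        else:
            mod = importlib.import_module(name)
            results[name] = getattr(mod, "__version__", "unknown")
    return results

# Inside the container you would probe e.g. ["torch", "langchain", "llama_cpp"];
# a stdlib module and a bogus name are used here so the snippet runs anywhere.
print(probe(["json", "definitely_not_installed"]))
```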

But it works in langchain:r35.4.1:

python3 -c 'import torch; print(f"PyTorch version: {torch.__version__}");'
PyTorch version: 2.0.0+nv23.05
python3 -c 'import langchain;print(f"LC version: {langchain.__version__}");'
LC version: 0.0.350

But langchain:samples is based on langchain:r35.4.1, so it's strange that langchain:samples is missing the pytorch and langchain packages, not to mention llama_cpp. I cannot even run the default demo LangChain_Local-LLMs.ipynb successfully:

ModuleNotFoundError                       Traceback (most recent call last)
Cell In[1], line 1
----> 1 from langchain.llms import LlamaCpp
      2 from langchain.callbacks.manager import CallbackManager
      3 from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

ModuleNotFoundError: No module named 'langchain'
dusty-nv commented 7 months ago

Thanks for pointing this out @smithlai, it should be fixed in https://github.com/dusty-nv/jetson-containers/commit/a637fc9eb8e169454890a0874aca617a81edbadc

dusty-nv commented 7 months ago

OK, these are rebuilding now across all JP5/JP6 container versions and are being pushed to dockerhub 👍

smithlai commented 7 months ago

Thank you dusty,

And there is one more issue. I tried to rebuild my container with ./build -n my_container langchain or ./build -n my_container llama_cpp, but it always fails here:

Step 17/17 : RUN pip3 show llama-cpp-python | grep llama &&     python3 -c 'import llama_cpp' &&     python3 -m llama_cpp.server --help
 ---> Running in 9e276c3c1db9
Name: llama_cpp_python
Summary: Python bindings for the llama.cpp library
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 74, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)
  File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 87, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/usr/local/lib/python3.8/dist-packages/llama_cpp/llama_cpp.py", line 76, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.8/dist-packages/llama_cpp/libllama.so': libcuda.so.1: cannot open shared object file: No such file or directory
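The traceback shows llama_cpp using ctypes to dlopen its compiled library, which in turn needs libcuda.so.1; that driver library is only mounted into the container by the NVIDIA runtime. A minimal sketch of the same loading pattern (the helper name and the stand-in library are illustrative, not from llama_cpp):

```python
import ctypes
import ctypes.util

def try_load(libname):
    """Attempt to dlopen a shared library; return (handle_or_None, error_message)."""
    # Resolve a short name like "cuda" to a full soname where possible,
    # falling back to the name as given (mirrors what bindings often do).
    path = ctypes.util.find_library(libname) or libname
    try:
        return ctypes.CDLL(path), None
    except OSError as e:
        # This is the failure mode in the build log: the .so (or one of its
        # dependencies, here libcuda.so.1) is not visible to the dynamic linker.
        return None, str(e)

# libcuda.so.1 only exists when the NVIDIA runtime is active; libm (the C math
# library) is used as a stand-in that exists on any glibc Linux system.
handle, err = try_load("libm.so.6")
print("loaded" if handle else f"failed: {err}")
```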

This platform is Jetson AGX Orin

dusty-nv commented 7 months ago

Hi @smithlai, did you set your default docker runtime to nvidia? This enables CUDA to be used during the builds.

https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime
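For reference, the linked setup doc amounts to making the nvidia runtime Docker's default (so libcuda.so.1 is mounted during `docker build`) and restarting the daemon. A sketch of the config change, assuming the standard /etc/docker/daemon.json location:

```shell
# Set nvidia as the default Docker runtime (overwrites any existing daemon.json;
# merge by hand if you have other settings there).
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker

# Verify: this should report "Default Runtime: nvidia"
docker info | grep -i 'default runtime'
```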

smithlai commented 7 months ago

> Hi @smithlai, did you set your default docker runtime to nvidia? This enables CUDA to be used during the builds.
>
> https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime

My apologies!

I didn't notice that link. Everything works now.