UOB-AI / UOB-AI.github.io

A repository to host our documentation website.
https://UOB-AI.github.io

Error running basic sample code for a GGUF LLM model from Hugging Face #49

Closed EbrahimSadri closed 5 months ago

EbrahimSadri commented 5 months ago

I am facing an issue trying to run basic sample LLM code from Hugging Face. The details are at the bottom.

The LLM is a quantized GGUF model, available at https://huggingface.co/TheBloke/Orca-2-7B-GGUF.

I tried running this sample code in a virtual environment but was unable to do so. I have also tried other GGUF models, all of which resulted in the same error.

The same code and model ran successfully on a Windows 10 laptop, using the GPU, in a virtual environment.

The details are as follows:

- Job ID  -> 13600
- Partition -> GPU01
- Node -> 1          
- MODEL -> TheBloke/Orca-2-7B-GGUF
- MODEL_FILE -> orca-2-7b.Q4_K_M.gguf
- MODEL_TYPE -> llama

Code

main.py

from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Orca-2-7B-GGUF",
    model_file="orca-2-7b.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # number of layers to offload to the GPU; 0 runs on CPU only
)

print(llm("AI is going to"))

Error

Traceback (most recent call last):
  File "/home/nfs/202201045/TEST PROJECT/main.py", line 110, in <module>
    llm = AutoModelForCausalLM.from_pretrained(
  File "/home/nfs/202201045/.local/lib/python3.9/site-packages/ctransformers/hub.py", line 175, in from_pretrained
    llm = LLM(
  File "/home/nfs/202201045/.local/lib/python3.9/site-packages/ctransformers/llm.py", line 246, in __init__
    self._lib = load_library(lib, gpu=config.gpu_layers > 0)
  File "/home/nfs/202201045/.local/lib/python3.9/site-packages/ctransformers/llm.py", line 126, in load_library
    lib = CDLL(path)
  File "/data/software/miniconda3/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /home/nfs/202201045/.local/lib/python3.9/site-packages/ctransformers/lib/avx2/libctransformers.so)
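For anyone hitting the same `OSError`: the prebuilt ctransformers wheel was compiled against glibc 2.29+, while the node's system libc is older. A minimal standard-library sketch to check which glibc your Python interpreter is linked against (the version pair it prints will vary by machine):

```python
import platform

# platform.libc_ver() reports the C library the interpreter was linked against,
# e.g. ('glibc', '2.28') on the cluster node vs the 2.29+ the wheel expects.
name, version = platform.libc_ver()
print(name, version)
```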

Steps to reproduce error

1. Open a terminal / Jupyter notebook
2. Install the virtualenv package: `pip install virtualenv`
3. Create a new environment: `python -m venv env`
4. Activate the environment (in the terminal): `source env/bin/activate`
5. Install the ctransformers library with GPU support: `pip install ctransformers[cuda]`
6. Run the code above: `python main.py`
asubah commented 5 months ago

Hi, thanks for the detailed issue! This helps us a lot in finding a solution ASAP. Our cluster is using glibc 2.28; until we migrate to a RHEL 9 compatible OS, the only feasible solution is to build the library against glibc 2.28. The good news is that pip makes it easy to build the library from source. Just run the following commands in your venv and it should work:

pip uninstall ctransformers
pip install ctransformers --no-binary ctransformers
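Since the original `OSError` was raised while loading the shared library at import time, a quick sanity check after rebuilding is simply to import the package; a clean import suggests the rebuilt `libctransformers.so` links against the node's glibc. A small sketch that reports the outcome without loading any model:

```python
# A clean import means the rebuilt shared library loaded successfully.
try:
    import ctransformers  # noqa: F401 -- loads libctransformers.so via CDLL
    print("ctransformers loaded OK")
except ImportError:
    print("ctransformers is not installed in this environment")
except OSError as e:
    print(f"shared library failed to load: {e}")
```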
EbrahimSadri commented 5 months ago

Hi,

Thank you. I was able to run the code successfully. I appreciate your help.