marella / ctransformers

Python bindings for Transformer models implemented in C/C++ using the GGML library.
MIT License

`GLIBC_2.29' not found #58

Open · Rajmehta123 opened 1 year ago

Rajmehta123 commented 1 year ago

I am using a Linux system with a GPU and installed ctransformers 0.2.14 using pip. It installed fine. But now when I try to run the GGML model quantized by @TheBloke (TheBloke/upstage-llama-30b-instruct-2048-GGML), I get the following error.

In fact, whichever model I run, I get the same error. I have CUDA 12.2.

from ctransformers import AutoModelForCausalLM
llm = AutoModelForCausalLM.from_pretrained('TheBloke/upstage-llama-30b-instruct-2048-GGML', model_type='llama')
print(llm('AI is going to'))

ERROR: OSError: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /home/ec2-user/.conda/envs/summr/lib/python3.10/site-packages/ctransformers/lib/cuda/libctransformers.so)
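
For context: the error means the prebuilt library needs symbols versioned glibc 2.29, while the system provides an older glibc. A minimal sketch to print the glibc version your system actually has (Linux/glibc only; this ctypes usage is an illustration, not part of ctransformers):

    # Sketch: ask glibc for its version string via ctypes.
    # Anything older than the 2.29 named in the error means the prebuilt
    # library cannot load on this system.
    import ctypes

    libc = ctypes.CDLL("libc.so.6")
    libc.gnu_get_libc_version.restype = ctypes.c_char_p
    print(libc.gnu_get_libc_version().decode())  # e.g. an older 2.2x on Amazon Linux 2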

rohithq commented 1 year ago

I am facing the same error

marella commented 1 year ago

Can you please share the OS version? Also, please run the following command and share its output:

ldd --version

If you are running it on an EC2 instance, please share the instance type and AMI name. It looks related to https://repost.aws/questions/QUrXOioL46RcCnFGyELJWKLw/glibc-2-27-on-amazon-linux-2, where Amazon Linux 2 instances use older glibc versions. Based on the answers in the above post, using Amazon Linux 2023 should solve this issue. If you would like to use it on your current system, you can try building it from source:

pip uninstall ctransformers # uninstall if already installed
CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers
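
After the source build completes, you can confirm the mismatch is gone by loading the rebuilt library directly. A sketch, assuming only the libctransformers.so name seen in the errors above:

    # Sketch: find and dlopen every libctransformers.so under the installed
    # package; a clean load (no OSError) means it links against your glibc.
    # (A CUDA build may additionally need the CUDA libraries on the loader path.)
    import ctypes
    from pathlib import Path

    import ctransformers

    for lib in Path(ctransformers.__file__).parent.rglob("libctransformers.so"):
        ctypes.CDLL(str(lib))
        print("loaded OK:", lib)
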
thistleknot commented 1 year ago

I'm having the same error, but on Oracle Linux 8.

I understand that glibc is core to the operation of the OS, so without doing a complete distro upgrade, I'm not sure how to get by... I tried

    pip uninstall ctransformers  # uninstall if already installed
    CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers

but I get the same error:

WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
(textgen) [root@pve0 ctransformers]# python
Python 3.10.9 (main, Mar  8 2023, 10:47:38) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from ctransformers import AutoModelForCausalLM, AutoTokenizer
>>> model = AutoModelForCausalLM.from_pretrained("/data/text-generation-webui/models/Llama-2-7b-Chat-GGUF/", hf=True, model_file="llama-2-7b-chat.Q2_K.gguf")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/ctransformers/ctransformers/hub.py", line 175, in from_pretrained
    llm = LLM(
  File "/data/ctransformers/ctransformers/llm.py", line 246, in __init__
    self._lib = load_library(lib, gpu=config.gpu_layers > 0)
  File "/data/ctransformers/ctransformers/llm.py", line 126, in load_library
    lib = CDLL(path)
  File "/data/ubuntu_22.04_sandbox/root/miniconda3/envs/textgen/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /data/ctransformers/ctransformers/lib/avx2/libctransformers.so)

cmosguy commented 1 year ago

@thistleknot I am getting the same error - were you able to resolve this issue?

thistleknot commented 1 year ago

I did actually, just now:

glibc error:

    pip uninstall ctransformers
    git clone https://github.com/marella/ctransformers
    cd ctransformers
    CMAKE_ARGS="-DCT_CUBLAS=ON -DCT_INSTRUCTIONS=avx" pip install .
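
The -DCT_INSTRUCTIONS=avx flag pins the build to plain AVX. A small sketch (Linux only, not part of ctransformers) to check which SIMD sets your CPU actually advertises before choosing a value:

    # Sketch: read /proc/cpuinfo and report the flags relevant to
    # picking -DCT_INSTRUCTIONS (avx vs. avx2).
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break
    print("avx: ", "avx" in flags)
    print("avx2:", "avx2" in flags)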

cmosguy commented 1 year ago

@thistleknot I just tried this, but I keep running into this error and I do not understand how to work around the issue: https://github.com/ggerganov/llama.cpp/issues/3398#issue-1919143756

It seems the llama.cpp library is causing problems for me here, and I cannot get it to compile and install with CUDA 12.1. Please advise.

cmosguy commented 1 year ago

@marella can you please help me understand how to get the fixes made in the llama.cpp repo updated here? It seems there is a PR merged in that repo that needs to be applied here too: https://github.com/ggerganov/llama.cpp/pull/3403

thistleknot commented 1 year ago

I have some instructions at home from when I built the llama.cpp .so object and did an ln -s to link it into the default library path location.

I forgot the exact command, but it was something like make ... .so; I then replaced the llama.cpp library that was installed/built by llama-cpp-python (not exactly the same as your issue). I suspect ctransformers uses llama.cpp on the backend as well, and that outside step carried over into the instructions I sent you earlier.

Set your make [cu]BLAS env vars, then:

    make libllama.so

The location of the offending file presented itself when I tried to run the llama-cpp-python server: it would hit an invalid/illegal instruction, and Python's debug log showed it coming from this .so in a lib path.

You will likely need to find this lib path and delete the file (it gets reinstalled when you do a make [install], or you can reinstall via llama-cpp-python, which builds it).

Which is why I thought you simply needed to do a pip uninstall llama-cpp and then install llama-cpp-python with the cuBLAS parameter set.

Either way, that's where I would investigate.

Also, if your GPU's compute capability is below 6.0, you need to modify the Makefile arch settings and specify the compute capability (hopefully that's not your issue, because I had to ask ChatGPT that question and again my notes are at home).
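
If you want to check the compute capability without digging through notes, a sketch assuming a CUDA-enabled PyTorch install:

    # Sketch: print the GPU's compute capability; below (6, 0) you would
    # need the Makefile arch changes mentioned above.
    import torch

    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        print(f"compute capability: {major}.{minor}")
    else:
        print("no CUDA device visible")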

cmosguy commented 1 year ago

@thistleknot thanks for your reply. I tried my best to understand your last comment, but I feel like the root cause is simply not having GLIBC 2.29 installed?

thistleknot commented 1 year ago

I'm home now:

    rm -rf /root/miniconda3/envs/llama-cpp/lib/python3.10/site-packages/llama_cpp_cuda
    LLAMA_CUBLAS=1 make libllama.so
    ln -s /data/text-generation-webui/llama.cpp /root/miniconda3/envs/llama-cpp/lib/python3.10/site-packages/llama_cpp_cuda
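
(Rough idea: the rm clears the stale prebuilt package, LLAMA_CUBLAS=1 make libllama.so rebuilds the library with cuBLAS, and the ln -s points the env at the local llama.cpp build instead. The paths are specific to my setup, so adjust them to yours.)
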
cmosguy commented 1 year ago

Thanks for the feedback; gosh, that feels really hacky. Is there any other solution that fixes the actual ctransformers package?

thistleknot commented 1 year ago

I'm not the maintainer, I was merely trying to help.

cmosguy commented 1 year ago

Hey @thistleknot, I do appreciate your help. Understood; let me try your solutions 😀

AayushSameerShah commented 1 year ago

@cmosguy Hey! Any luck with any of the solutions? Please share 🤗

vishnua2j commented 10 months ago

I am facing the same error: OSError: /lib64/libm.so.6: version `GLIBC_2.29' not found... Any solution?

khanjandharaiya commented 10 months ago

@vishnua2j Try installing the library with pip install ctransformers --no-binary ctransformers. It solved the problem I was facing earlier.

vishnua2j commented 10 months ago

@khanjandharaiya

I tried running pip install ctransformers --no-binary ctransformers, but I am still getting the same error. I am running on an AWS server.

khanjandharaiya commented 10 months ago

@vishnua2j Try with this command: pip install ctransformers --no-binary ctransformers --no-cache-dir. It may solve your problem (--no-cache-dir keeps pip from reusing a previously cached build of the package).

vishnua2j commented 10 months ago

@khanjandharaiya Thank you very much, it worked!

jstremme commented 10 months ago

> @vishnua2j Try with this command: pip install ctransformers --no-binary ctransformers --no-cache-dir. It may solve your problem.

Absolute legend <3

cmosguy commented 10 months ago

This is interesting to note, but if you are working with a requirements.txt file, how do you put this information in it?
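
For what it's worth, pip's requirements file format does accept a handful of global options on their own lines, --no-binary among them, so a sketch of an equivalent requirements.txt would be:

    # requirements.txt
    # global option: build ctransformers from source instead of using a wheel
    --no-binary ctransformers
    ctransformers

(--no-cache-dir is not a requirements-file option as far as I know; that one stays on the pip command line.)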

Haizhuolaojisite commented 10 months ago

> CMAKE_ARGS="-DCT_CUBLAS=ON -DCT_INSTRUCTIONS=avx" pip install .

This solved my problem on GCP!! Really saved my day! Thanks so much!