Closed Song367 closed 8 months ago
Same error here, running on Ubuntu 18.04.
Wouldn't removing llama-cpp-python break something?
Is this an AMD CPU?
It's not related to the CPU.
Error solved by upgrading to gcc-11. Try that first.
That's what I did, and the error resolved.
Is your operating system CentOS?
gcc-11 did not work.
Upgrading to gcc-11 did not work for me.
I am getting the same error, and gcc-11 doesn't do anything, on Ubuntu 22.04 and Fedora 37.
Have you solved the problem? I'm also running on Ubuntu 18.04.
Updating to gcc-11 and g++-11 worked for me on Ubuntu 18.04.
I did that using sudo apt install gcc-11 and sudo apt install g++-11.
Using it on Windows WSL, I additionally had to run a few more installations:
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11
pip install --upgrade pip
pip install --upgrade setuptools wheel
sudo apt-get install build-essential
This was all done to install oobabooga on Windows WSL. Here is my complete list for a Windows 10 NVIDIA system:
# Update Ubuntu packages
sudo apt update
sudo apt upgrade
# Download and install Miniconda
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
rm Miniconda3.sh
# IMPORTANT - restart the terminal so it shows (base) at the beginning of the line
# Update conda, install wget, create and activate conda environment "textgen"
conda update conda
conda install wget
conda create -n textgen python=3.10.9
conda activate textgen
# Install CUDA libraries
pip3 install torch torchvision torchaudio
# Add PPA for gcc-11, update packages, install gcc-11, g++-11, update pip and setuptools, install build-essential
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11
pip install --upgrade pip
pip install --upgrade setuptools wheel
sudo apt-get install build-essential
# Clone and setup oobabooga text-generation-webui
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
# Final update of Ubuntu packages
sudo apt update
sudo apt upgrade
This should be the accepted solution.
gcc-11 alone would not work; it needs both gcc-11 and g++-11.
After installing the two above, run CXX=g++-11 CC=gcc-11 pip install -r requirements.txt
and it should work (at least it did for me).
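For anyone unsure what the CXX=g++-11 CC=gcc-11 prefix does: variables assigned in front of a command apply only to that one command's environment, not to the rest of the shell session. A minimal sketch of this behavior (no compilers needed, the values are just strings here):

```shell
# A VAR=value prefix is visible only to the command it precedes.
unset CC CXX
CC=gcc-11 CXX=g++-11 sh -c 'echo "build would use CC=$CC CXX=$CXX"'
echo "afterwards CC is '${CC:-unset}'"
```

So if the pip install runs in a separate step later, the variables would need to be set again (or exported).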
Nothing worked until I ran CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48
(from PR #120)
Thank you @itgoldman. This worked on Windows (thanks ChatGPT):
set "CMAKE_ARGS=-DLLAMA_OPENBLAS=on"
set "FORCE_CMAKE=1"
pip install llama-cpp-python --no-cache-dir
Worked for me on Ubuntu 18.04:
sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 90 --slave /usr/bin/g++ g++ /usr/bin/g++-11 --slave /usr/bin/gcov gcov /usr/bin/gcov-11
sudo apt-get update
pip install -r requirements.txt
This worked for me.
@robicity with the save! build-essential was the package for me, but I also tried a few of the methods mentioned previously, so they might help you too:
sudo apt-get install build-essential
sudo apt-get install gcc-11 g++-11
gcc-11 or gcc-12, it doesn't matter, I don't think. With those installed you can rerun your pip command.
Hey everyone, I installed a fresh ubuntu and this sequence solved this issue:
Update apt package manager and change into home directory
sudo apt-get update && cd ~
Install pre-requisites
sudo apt install curl &&
sudo apt install cmake -y &&
sudo apt install python3-pip -y &&
pip3 install testresources # dependency for launchpadlib
Also gcc-11 and g++-11 need to be installed to overcome this llama-cpp-python compilation issue
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test &&
sudo apt install -y gcc-11 g++-11 &&
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60 --slave /usr/bin/g++ g++ /usr/bin/g++-11 &&
pip3 install --upgrade pip &&
pip3 install --upgrade setuptools wheel &&
sudo apt-get install build-essential &&
gcc-11 --version # check if gcc works
Download the WebUI installer from repository and unpack it
wget https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip &&
unzip oobabooga_linux.zip &&
rm oobabooga_linux.zip
Change into the downloaded folder and run the installer; this will download the necessary files etc. into a single folder
cd oobabooga_linux &&
bash start_linux.sh
Hope this helps!
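One note on the && chaining in the sequence above: && only runs the next command if the previous one succeeded, so a failed apt step stops the chain before the later pip steps run against a half-configured system. A tiny sketch of this plain POSIX shell behavior:

```shell
# && runs the right-hand side only when the left-hand side succeeded.
true && echo "step 2 runs"
false && echo "never printed"
echo "chain exit status: $?"   # the failed chain reports 1
```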
Perfect, flawless. Someone needs to add this to the docs
Same issue here in KDE Neon with GCC 11.3.0 and G++ 11.3.0, and also in Manjaro with GCC 12.2.x. In Manjaro, oobabooga complained that it could not find a GCC 9 compiler. None of the solutions in this thread, nor in the oobabooga Reddit thread titled 'Failed building wheel for llama-cpp-python', worked.
Curiously, I had no problem rolling oobabooga with all wheels attached in Linux Mint 21.1. I don't remember which compiler version is in Linux Mint 21.1, probably GCC 11.3.0. I did not have to jump through any hoops nor whisper sacred incantations while shaking a chicken foot and turning around three times with my eyes closed.
I don't think anyone really got to the bottom of this Llama-cpp-python wheel failure issue in a systematic way, especially when one Debian derivative works (Linux Mint) and another Debian variant (KDE Neon) does not.
God bless you, this worked.
It works for me, thank you
First, install:
conda install -c conda-forge cxx-compiler
And then try running pip install llama-cpp-python==0.1.48.
It worked for me, since it picks the C++ compiler from conda instead of the root machine. So, without changing your compiler version, you will be able to install llama-cpp-python.
For Windows:
@atharvapatiil it worked! Thanks.
I am reporting back with success compiling and running oobabooga today in all three of my Linux distros – for the first time ever! These distros include: Linux Mint 21.1 (stable), KDE Neon (rolling plasma), and Manjaro KDE (rolling Arch). There are no more wheels problems and no more safetensors problems. All 13B models tested perfectly.
Well done!
Thanks. This worked for me on Fedora 38. In Fedora, installing the NVIDIA CUDA driver toolkit has gcc as a prerequisite.
sudo dnf install gcc-13 (tab to autocomplete)
sudo dnf install gcc-c++.13 (tab to autocomplete)
Then I ran the one-click installer again and it flew straight through to completion.
Sorry guys, I am also getting the error: "Failed building wheel for llama-cpp-python. Failed to build llama-cpp-python. ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects"
I am on Windows 11; is there any prerequisite installation to get rid of this error?
In my case the problem was related to the Ubuntu CUDA settings, I fixed it by setting up the CUDACXX param as follows:
CUDACXX=/usr/local/cuda/bin/nvcc CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
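Before borrowing that command, a quick sanity check: confirm nvcc actually exists at the path you pass as CUDACXX. The /usr/local/cuda symlink is an assumption; your install may live only under a versioned directory such as /usr/local/cuda-12.x. A small sketch:

```shell
# Hypothetical check: confirm the nvcc binary exists before setting CUDACXX.
NVCC=/usr/local/cuda/bin/nvcc
if [ -x "$NVCC" ]; then
  echo "found nvcc at $NVCC"
else
  echo "no nvcc at $NVCC; point CUDACXX at your actual CUDA install"
fi
```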
Pain. It's been days and I can't solve it. Windows 10.
I am not a Windoze guy, but did you try uninstalling and reinstalling gcc and gcc-c++ and the libraries listed in start_linux.sh? I suspect damaged libraries may be the issue at least for some of us. [Please correct me if this hypothesis is wrong.]
If all else fails, you can always try kicking the tires (a bad wheels joke) or, more practically, install one of the Linux flavors. I recommend Linux Mint or Pop!_OS for a first distro. LM is said to be the most Windows-like and macOS-like of the distros. Doing it in a VM is risk-free if you have enough storage capacity (mostly for the quantized AI model), but it will be slow. I cannot attest to Pop!_OS for ooba (never tried it), but LM installation is straightforward, and there are no wheels problems.
G'luck.
I reinstalled gcc and I solved the problem of building llama-cpp-python, but I still can't use the GPU, same issue: https://github.com/oobabooga/text-generation-webui/issues/2782
Sorry, I forgot to mention my OS. I am on Windows 11. It would be helpful if you could list any prerequisites for Windows, like Visual Studio or CMake.
Have you tried a GPTQ model? (For example: this one)
Great thanks to https://github.com/oobabooga/text-generation-webui/issues/1534#issuecomment-1590614539 from @atharvapatiil!
This works on Ubuntu 18.04.6:
conda install cxx-compiler
pip install llama-cpp-python==0.1.48
You also need to comment out the related line in requirements.txt after installing llama-cpp-python==0.1.48 separately:
# llama-cpp-python==0.1.66; platform_system != "Windows"
Then running pip install -r requirements.txt will work.
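The comment-out step can be scripted; a sketch using sed on a throwaway sample file (the second requirement line is made up for illustration):

```shell
# Build a sample requirements.txt in a temp dir, then comment out the
# pinned llama-cpp-python line so pip skips it.
cd "$(mktemp -d)"
printf '%s\n' 'llama-cpp-python==0.1.66; platform_system != "Windows"' 'gradio' > requirements.txt
sed -i 's/^llama-cpp-python/# llama-cpp-python/' requirements.txt
cat requirements.txt
```

The same sed line run against the real requirements.txt has the same effect as editing it by hand.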
It's still not working for me
Same issue on Windows 10. Failed to build wheel with cuBLAS.
Here is how I made it work.
Building wheel for llama-cpp-python (pyproject.toml) ... done
Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.68-cp310-cp310-win_amd64.whl size=556525 sha256=19e69fe8446d27a80796085752ebaf5a9c8bd1471c1b1f15aed47709858b6ad9
Stored in directory: C:\Users...\AppData\Local\Temp\pip-ephem-wheel-cache-hy7n1udj\wheels\df\f2\fb\b8153a244ace60fa4759cbd3d4881a2132b71e0e894ed6f29b
Successfully built llama-cpp-python
Installing collected packages: llama-cpp-python
Successfully installed llama-cpp-python-0.1.68
I was bashing my head against the wall with this. For anyone rocking Manjaro:
Install gcc-11 with:
yay -S gcc11
Press 'N' for both prompts before the installations (unless you like rebuilding the packages and waiting for days).
Run the installation with the following args:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" NVCC_PREPEND_FLAGS='-ccbin /usr/bin/g++-11' FORCE_CMAKE=1 CXX=g++-11 CC=gcc-11 pip install llama-cpp-python --no-cache-dir
Where -ccbin should point to your g++-11 installation folder (with yay it will probably be the same as mine).
???
Profit.
P.S. I also edited nvcc.profile, because setting CC=gcc-11 CXX=g++-11 was enough to satisfy most of the build, but nvcc required some special attention. From what I've read it's a bit of a "no no" and I'm not sure it actually helped in this situation, so tread lightly~
(The edit was adding the gcc-11 and g++-11 paths to the PATH variable in nvcc.profile.)
Cheers~
This did the trick for me on Arch, using the system nvcc instead of the one from Conda.
Thanks, it works!
On a clean install of Ubuntu 22.04 LTS, just adding sudo apt-get install build-essential was enough for me. On my recent 22.04 (installed July 2023), gcc was already version 11.3.0.
I ran the ./start_linux.sh script after the first error, and that failed to reinstall (probably because enough of it was installed by the time of the llama error that the start script no longer worked). In lieu of deleting everything and starting over, I just ran update_linux.sh instead; it picked up from where it died the first time and appears to be running correctly.
I solved this on CentOS 7 with these:
This worked for me on Ubuntu 18.04:
sudo apt install build-essential manpages-dev software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update && sudo apt install gcc-11 g++-11
https://stackoverflow.com/questions/67298443/when-gcc-11-will-appear-in-ubuntu-repositories
@filmo this worked like a charm! Thank you
In addition to this, I added the following to my environment (.bashrc or .zshrc) to successfully install llama-cpp-python on Ubuntu 22.04:
export CUDA_HOME=/usr/local/cuda-12.2
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
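The reason the PATH line above helps is lookup order: the shell runs the first matching executable it finds, so prepending ${CUDA_HOME}/bin makes that nvcc win over any other one on the system. A self-contained sketch with a stub binary in a temp directory (the stub stands in for a real CUDA install):

```shell
# Create a throwaway "CUDA" dir with a stub nvcc and prepend it to PATH.
CUDA_STUB=$(mktemp -d)
printf '#!/bin/sh\necho "stub nvcc"\n' > "$CUDA_STUB/nvcc"
chmod +x "$CUDA_STUB/nvcc"
PATH="$CUDA_STUB:$PATH"
nvcc    # resolves to the stub because its directory comes first in PATH
```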
Nice, bro @syedhabib53, this worked on my side.
g++-11 works for me.
Before pip install -r requirements.txt, try something like:
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++
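The difference between this and the VAR=value prefix used elsewhere in the thread: export makes the variable visible to every child process started afterwards (which is why it works for a later pip run in the same session). A minimal sketch:

```shell
unset CC
CC=/usr/bin/gcc                 # plain shell variable: children do not see it
sh -c 'echo "child sees CC=${CC:-unset}"'
export CC                       # exported: children now inherit it
sh -c 'echo "child sees CC=${CC:-unset}"'
```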
For Ubuntu 22.04.2, I had to do the following, which worked for me:
sudo apt update
sudo apt-get install build-essential
sudo apt-get install ninja-build
pip install -r requirements.txt
Describe the bug
Installing llama-cpp-python fails with: ERROR: Failed building wheel for llama-cpp-python