Closed · oobabooga closed this 1 year ago
Hmmm. I just went through this process. I'll try to see if I find anything.
Ohhh, it looks like when installing the llama_cpp package, we also fetch the llama.cpp repo in order to build the shared library. But not everyone has a build environment configured, so it fails. It spams a lot of messages about Visual Studio, but the real issue is here:
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:3 (ENABLE_LANGUAGE):
No CMAKE_C_COMPILER could be found.
@abetlen do you agree?
Can you include prebuilt?
Like pre-built binaries?
The issue is that the binaries will likely not be built with the correct optimizations for the user's particular CPU, which will likely result in much worse performance than the user expects.
What's the process for setting up the build environment on Windows? Could we add this to the docs?
The compilers conda package includes a C compiler, but I'm unsure if it is enough to compile on Windows. When I tried it, it detected my VS Build Tools installation and used that instead. Can someone without Visual Studio try it? I don't have the time right now to set up a VM.
I saw this, which might explain why it's going for Visual Studio. I don't know an easy way to get people set up with a way to build it, though.
Yes, there is a Python package called setuptools that includes an extension called setuptools.msvc, which can help locate and configure the Microsoft Visual C++ Build Tools automatically when building extension modules on Windows. However, it's important to note that setuptools.msvc can only help configure the build environment if the Microsoft Visual C++ Build Tools are already installed on the system. It does not install the build tools for the user. Therefore, users still need to install the build tools themselves as described in the previous response.
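To illustrate the point above, here is a hedged sketch of a preflight probe: it only checks whether the `setuptools.msvc` helper is importable (and that we are on Windows at all), which mirrors the limitation described: the helper can locate Build Tools but cannot install them. The function name is mine, not part of any library.

```python
import platform


def msvc_probe() -> bool:
    """Return True if setuptools' MSVC helper is importable on Windows.

    This only checks importability; actually locating the Build Tools
    still requires them to be installed, as noted above.
    """
    if platform.system() != "Windows":
        return False
    try:
        # The registry/vswhere lookup logic lives in this module.
        from setuptools import msvc  # noqa: F401
    except ImportError:
        return False
    return True


print(msvc_probe())
```

On a non-Windows machine this short-circuits to `False` without touching setuptools at all.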
I see a lot of stuff about using 'wheels' for different systems. But I know basically nothing about that haha. I know lots of people use the prebuilt binaries from releases on llama.cpp, but getting them set up on Windows for building isn't quick. Sometimes people aren't able to run the binaries (which in our case would be the lib). 🤔
If we don't figure out a better solution maybe we can include a base lib that's compatible with many systems and either fall back to that if not able to build, or just use it and tell the user that they might be able to get better performance by compiling it on their machine.
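The fallback idea above could be sketched as a simple library-path probe: prefer a locally built shared library, fall back to a bundled generic one. This is a hypothetical sketch; the function and file names are mine, not llama-cpp-python's actual loader.

```python
import os


def pick_library(candidates):
    """Return the first shared-library path that exists, else None.

    Sketch of the fallback idea above: try a locally built lib first,
    then fall back to a bundled generic build.
    """
    for path in candidates:
        if os.path.exists(path):
            return path
    return None


# A caller could then warn the user when only the generic build was found,
# e.g. (hypothetical paths):
# lib = pick_library(["build/libllama.so", "prebuilt/libllama_generic.so"])
```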
I think I'm just going to resolve this by adding a link to install instructions for MSVC C++ or the conda compilers package. I worry that with the pre-built wheels there'll either be bugs to worry about or the issues will get flooded with performance complaints.
@oobabooga can you try running micromamba install compilers -c conda-forge and then installing the llama-cpp-python package?
okay, true hahaha
Actually, on second thought, we could do wheels and just upload them to the GitHub releases section, if that's what you meant. That way the default pip install llama-cpp-python can remain a source-only package, and if anyone wants a wheel install for whatever reason they can pip install https://github.com/abetlen/llama-cpp-python/releases/download/release-name/llama-cpp-python.whl.
I didn't really have a solution in mind haha. I just know from the gpt4all discord that lots of users aren't devs, so they like downloads. That seems like a good plan 👍
@jllllll @abetlen same error, but this time it only complained about VS 2019 and 2022 instead of every version since 1998
(C:\Users\me\Downloads\oobabooga-windows\oobabooga-windows\installer_files\env) C:\Users\me\Downloads\oobabooga-windows\oobabooga-windows>.\installer_files\mamba\micromamba.exe install compilers -c conda-forge
conda-forge/win-64 Using cache
conda-forge/noarch Using cache
Pinned packages:
- python 3.10.*
Transaction
Prefix: C:\Users\me\Downloads\oobabooga-windows\oobabooga-windows\installer_files\env
All requested packages already installed
Transaction starting
Transaction finished
(C:\Users\me\Downloads\oobabooga-windows\oobabooga-windows\installer_files\env) C:\Users\me\Downloads\oobabooga-windows\oobabooga-windows>pip install llama-cpp-python==0.1.23
Collecting llama-cpp-python==0.1.23
Using cached llama_cpp_python-0.1.23.tar.gz (530 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\users\me\downloads\oobabooga-windows\oobabooga-windows\installer_files\env\lib\site-packages (from llama-cpp-python==0.1.23) (4.5.0)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [49 lines of output]
--------------------------------------------------------------------------------
-- Trying 'Visual Studio 16 2019 x64 v142' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
Not searching for unused variables given on the command line.
CMake Error at CMakeLists.txt:2 (PROJECT):
Generator
Visual Studio 16 2019
could not find any instance of Visual Studio.
-- Configuring incomplete, errors occurred!
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Visual Studio 16 2019 x64 v142' generator - failure
--------------------------------------------------------------------------------
********************************************************************************
scikit-build could not get a working generator for your system. Aborting build.
Building windows wheels for Python 3.10 requires Microsoft Visual Studio 2022.
Get it with "Visual Studio 2017":
https://visualstudio.microsoft.com/vs/
Or with "Visual Studio 2019":
https://visualstudio.microsoft.com/vs/
Or with "Visual Studio 2022":
https://visualstudio.microsoft.com/vs/
********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
@oobabooga Yeah, we'll either have to wait for abetlen to upload wheels that the install script can download, or I can make a wheel for us to use in the meantime. As long as the wheel is installed before installing requirements.txt, you can add llama-cpp-python back into it.
@jllllll working on publishing the wheels through Github releases, you should then be able to add the wheel urls to a requirements file.
If a user then wants a more optimized version and they have a C compiler installed, they can just pip install --upgrade --no-deps --force-reinstall llama-cpp-python. Does that work?
@jllllll @oobabooga when one of you gets the chance, can you try to install a wheel from the artifacts built here: https://github.com/abetlen/llama-cpp-python/actions/runs/4636312659
Sorry, there's like a million of them; we probably don't need to build for each version of Python independently. I'll try to clean this up tomorrow. Cheers
@abetlen Works great!
I'll preface this with: I have no idea what I'm doing.
I forked this repo, added the cp310 .whl into a /wheels/ folder (removed it from .gitignore), and tried integrating the wheel the same way @jllllll's bitsandbytes wheel is used, as such:
@rem clone the repository and install the pip requirements
if exist text-generation-webui\ (
cd text-generation-webui
git pull
) else (
git clone https://github.com/oobabooga/text-generation-webui.git
call python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.37.2-py3-none-any.whl
call python -m pip install https://github.com/Loufe/llama-cpp-python/tree/main/wheels/llama_cpp_python-0.1.25-cp310-cp310-win_amd64.whl
cd text-generation-webui || goto end
)
I'm getting a validity error. What am I missing?
@Loufe Must be something wrong with that wheel. Try one from here: https://github.com/abetlen/llama-cpp-python/suites/12105838837/artifacts/637822425
Extract llama_cpp_python-0.1.26-cp310-cp310-win_amd64.whl
This is from the latest wheel build, and it works well for me.
So I guess to clarify, it works great running directly from powershell in Windows. Trying to figure out why the one-click install for Ooba seems to dislike it. I had originally pulled from the same pack using the 0.1.25. I tried the 0.1.26, no luck still.
@Loufe I am currently working on a Powershell-based one-click-installer. I have a basic one here: https://github.com/jllllll/one-click-installers/tree/oobabooga-windows-powershell
Currently, only the installer itself uses Powershell. I am working on an overhaul to it that will replace all of the batch scripts with Powershell ones. Powershell is much easier to work with compared to CMD. CMD has tons of inconsistencies and bugs.
@jllllll Fantastic! I've got some experience with Powershell, I might pop by and offer a hand if I have some spare time.
For the moment I'm determined to get the wheel working with the batch.
FYI, I got (almost) the same errors on a virgin (fully patched) Kubuntu install today; was able to fix using
$ sudo apt-get install build-essential python3-venv
Here are the errors prior to the apt-get above:
$ pip install --upgrade --no-deps llama-cpp-python
Collecting llama-cpp-python
Using cached llama_cpp_python-0.1.27.tar.gz (529 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [72 lines of output]
--------------------------------------------------------------------------------
-- Trying 'Ninja' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
Not searching for unused variables given on the command line.
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:3 (ENABLE_LANGUAGE):
No CMAKE_C_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Ninja' generator - failure
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
-- Trying 'Unix Makefiles' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.Not searching for unused variables given on the command line.
-- Configuring incomplete, errors occurred!
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Unix Makefiles' generator - failure
--------------------------------------------------------------------------------
********************************************************************************
scikit-build could not get a working generator for your system. Aborting build.
Building Linux wheels for Python 3.10 requires a compiler (e.g gcc).
But scikit-build does *NOT* know how to install it on ubuntu
To build compliant wheels, consider using the manylinux system described in PEP-513.
Get it with "dockcross/manylinux-x64" docker image:
https://github.com/dockcross/dockcross#readme
For more details, please refer to scikit-build documentation:
http://scikit-build.readthedocs.io/en/latest/generators.html#linux
********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
I'll try setting up build tools in lieu of the wheel on Windows and report back a little later. It would simplify things immensely if we can avoid the wheel.
@Free-Radical A C compiler is needed to compile llama-cpp-python. That is what build-essential includes.
@Loufe The wheel for Windows is mostly for convenience. Many users do not want to have to install Visual Studio to run oobabooga's webui. On top of that, Windows Python can be pretty unreliable in detecting and utilizing Visual Studio to compile. It took me 4 days straight to get packages to compile on my system. In the end, it just randomly started working for no discernible reason.
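Given how much of this thread comes down to "no C compiler found", a quick preflight check can save a failed build. This is a hedged stdlib sketch (the function name is mine): it looks on PATH for the usual compiler executables (`cl` for MSVC, `cc`/`gcc`/`clang` on Unix), which is roughly what CMake is failing to do when it reports "No CMAKE_C_COMPILER could be found".

```python
import shutil


def find_c_compiler():
    """Return (name, path) for the first C compiler found on PATH, else None.

    cl is MSVC's compiler driver; cc, gcc and clang are the usual Unix names.
    """
    for name in ("cl", "cc", "gcc", "clang"):
        path = shutil.which(name)
        if path:
            return name, path
    return None


print(find_c_compiler())
```

If this prints `None`, a source install of llama-cpp-python will almost certainly fail the same way as the logs above.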
Even with Visual Studio Code 2022 and C++ buildtools installed, I got a compile error.
workaround found thanks to replies above: python -m pip install https://github.com/Loufe/llama-cpp-python/raw/main/wheels/llama_cpp_python-0.1.26-cp310-cp310-win_amd64.whl --no-deps
This was the initial error:
RC Pass 1: command "rc /fo CMakeFiles\cmTC_bd7f0.dir/manifest.res CMakeFiles\cmTC_bd7f0.dir/manifest.rc" failed (exit code 0) with the following output: The system cannot find the file specifiedNMAKE : fatal error U1077: 'C:\Users\Adam\AppData\Local\Temp\pip-build-env-h3l4otla\overlay\Lib\site-packages\cmake\data\bin\cmake.exe -E vs_link_exe --intdir=CMakeFiles\cmTC_bd7f0.dir --rc=rc --mt=CMAKE_MT-NOTFOUND --manifests -- C:\PROGRA~1\MIB055~1\2022\PROFES~1\VC\Tools\MSVC\1435~1.322\bin\Hostx86\x64\link.exe /nologo @CMakeFiles\cmTC_bd7f0.dir\objects1.rsp @C:\Users\Adam\AppData\Local\Temp\nm85B6.tmp' : return code '0xffffffff' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\nmake.exe" -f CMakeFiles\cmTC_bd7f0.dir\build.make /nologo -L CMakeFiles\cmTC_bd7f0.dir\build' : return code '0x2'
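One plausible explanation for the earlier "validity error" with the forked-repo wheel: the failing install pointed at a github.com /tree/ page, which serves HTML, while the working command above uses /raw/, which serves the actual .whl bytes that pip can unpack. A hypothetical helper illustrating the rewrite (this is my guess at the cause, not something confirmed in the thread):

```python
def to_raw_url(url: str) -> str:
    """Rewrite a github.com /tree/ or /blob/ page URL to the /raw/ file URL.

    pip needs the actual file; the /tree/ and /blob/ URLs return an HTML
    page, which pip then rejects as an invalid wheel.
    """
    return url.replace("/tree/", "/raw/", 1).replace("/blob/", "/raw/", 1)
```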
@jllllll @Loufe okay, after fighting my way through GitHub Actions hell, I have a process to build wheels and attach them to releases. Now each time llama-cpp-python pushes a new version, a release will be created with pre-built wheels you can install.
https://github.com/abetlen/llama-cpp-python/releases/tag/v0.1.30
Please, what is the final solution for this error?
Hey @Alamin-pro, you can either try to install Microsoft Visual Studio C++, which comes with a C compiler, or head over to the Releases, find the wheel that matches your Python version / operating system, and then just pip install https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.32/llama_cpp_python-0.1.32-cp310-cp310-win_amd64.whl
Now that the wheel is available, I consider this issue solved on my side. I managed to cleanly integrate llama-cpp-python into my project by adding
llama-cpp-python==0.1.30; platform_system != "Windows"
https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.30/llama_cpp_python-0.1.30-cp310-cp310-win_amd64.whl; platform_system == "Windows"
to the requirements.txt, as suggested by @jllllll.
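The two requirement lines above work because pip evaluates the PEP 508 `platform_system` environment marker against `platform.system()`, so exactly one line applies per OS. A small sketch of that selection logic (the helper and constants are mine; the wheel URL is the real release asset quoted above):

```python
import platform

# The two requirement lines from the requirements.txt above, as strings.
SDIST = 'llama-cpp-python==0.1.30; platform_system != "Windows"'
WHEEL = ('https://github.com/abetlen/llama-cpp-python/releases/download/'
         'v0.1.30/llama_cpp_python-0.1.30-cp310-cp310-win_amd64.whl'
         '; platform_system == "Windows"')


def selected_requirement(system: str = platform.system()) -> str:
    """Return which of the two lines pip would act on for `system`."""
    return WHEEL if system == "Windows" else SDIST
```

So Windows users get the pre-built wheel while everyone else builds from source, which is exactly the split the thread settled on.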
Thanks @abetlen, it works fine now. Thanks everyone for the hard work!
@oobabooga sounds good! Let me know if y'all run into any more issues, happy to help. The wheel should be built for each new version so updating is straightforward. Cheers!
Thanks ever so much, really appreciate it. It looks like we install the models locally and then pass the path to the Llama class. Does this mean this build is compatible with all of the llama models, and other models based on llama like Alpaca, and even GPT4All?
@Al-aminI yes any model that is compatible with llama.cpp should work with this package
wow this is really great
@Al-aminI all thanks to the awesome llama.cpp, this is just a wrapper
@abetlen while installing the wheel I got this error: ERROR: llama_cpp_python-0.1.26-cp310-cp310-win_amd64.whl is not a supported wheel on this platform. I used this command: pip install https://github.com/Loufe/llama-cpp-python/raw/main/wheels/llama_cpp_python-0.1.26-cp310-cp310-win_amd64.whl. What do you think the problem is?
@Muhamad-Nady Either your Python version is different from what the wheel was built for, or it's for the wrong OS. The latest wheels for all OSes and Python versions can be found here: https://github.com/abetlen/llama-cpp-python/releases
Just replace the link in the pip command with the download link for the wheel you want.
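For picking the right file from the Releases page, the wheel filename encodes the interpreter and platform tags (e.g. cp310-cp310-win_amd64 means CPython 3.10 on 64-bit Windows). A simplified sketch of working out your own tags (real tag resolution is done by pip via the `packaging` library, and macOS tags also encode the OS version; this helper is mine and only covers the common cases):

```python
import sys
import platform


def expected_wheel_tags() -> str:
    """Best-guess interpreter/platform tags for picking a wheel manually.

    Assumes CPython and 64-bit x86, matching the wheels in this thread.
    """
    py = f"cp{sys.version_info.major}{sys.version_info.minor}"
    plat = {"Windows": "win_amd64",
            "Linux": "manylinux2014_x86_64",
            "Darwin": "macosx"}
    return f"{py}-{py}-{plat.get(platform.system(), 'unknown')}"


print(expected_wheel_tags())
```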
Yes, thanks @jllllll, it was the Python version. It's working okay now.
How did you check the Python version and fix it? I'm also stuck on the same issue.
@hskhawaja Use python -V
How can I choose which version of llama-cpp-python to use when installing on WSL2?
And why is the llama-cpp-python version in requirements.txt for Windows, when the install is designed for Linux?
Maybe my fault; I forgot to do sudo apt install build-essential.
@remybonnav If you don't want to compile it yourself, you can install the Linux wheel with:
python -m pip install https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.36/llama_cpp_python-0.1.36-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Actually, doing sudo apt install build-essential at the beginning of the whole oobabooga webui install completely solved the problem.
But I still can't load Vicuna...
For the lost souls out there, just download and install this: https://visualstudio.microsoft.com/vs/features/cplusplus/
Trying to install with
on Windows in a micromamba environment resulted in the following error. It seems like the package is looking for Visual Studio, which is not installed on my system.
Is it possible to make it such that the package can be installed without the need for Visual Studio?