ParisNeo / lollms-webui

Lord of Large Language Models Web User Interface
https://lollms.com
Apache License 2.0

Build fails while building wheel for llama-cpp-python #206

Closed chdelacr closed 1 year ago

chdelacr commented 1 year ago

Expected Behavior

Build completes

Current Behavior

Build fails while building wheel for llama-cpp-python

Steps to Reproduce

  1. Run webui.bat
  2. Build fails

Possible Solution

Installing Visual Studio, but hopefully there's another solution.

Context

Error log

Screenshots

Log provided

chdelacr commented 1 year ago

Just installed VS 2022 with C++ compiler packages, but it didn't work. Now getting this output: https://hastebin.skyra.pw/quxinaxote.csharp

BTW, it didn't ask me to download any model during the installation process.

Update: I had to manually copy the personalities into the root folder, and somehow an empty "models" folder was created on my D: drive. The repository is cloned at D:\GitHub\gpt4all-ui\GPT4All\

Nyandaro commented 1 year ago

Hi, I just tried deleting VS and reinstalling it, and I think I got the same error, but I can't see your log, lol. According to the link I posted in my issue #204, which was closed yesterday, the required version was VS2019, not VS2022. It doesn't build on VS2022, but it compiled fine on VS2019.

The important components are:

  1. MSVC v142 - VS 2019 C++ x64/x86 build tools
  2. Windows 10 SDK
  3. C++ CMake tools for Windows

I think it's these three.

Nyandaro commented 1 year ago

It's probably llama_cpp_python-0.1.51 that caused the problem, so first remove it with "pip uninstall llama-cpp-python", then do a fresh "pip install llama-cpp-python". Otherwise, the llama-cpp-python in your Python folder may not have been updated to 0.1.52.

You may not need to install VS if you do this
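After reinstalling, it's worth confirming that the environment really picked up the newer wheel. A minimal sketch of such a check (the 0.1.52 threshold is taken from the comment above; the helper names are illustrative, not part of lollms or llama-cpp-python):

```python
# Hedged sketch: verify that the reinstalled llama-cpp-python is at least
# the version that fixes the build problem. importlib.metadata is stdlib
# (Python 3.8+); 0.1.51/0.1.52 are the versions mentioned in this thread.
from importlib import metadata


def parse_version(v: str) -> tuple:
    """Turn '0.1.52' into (0, 1, 52) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))


def is_fixed(installed: str, minimum: str = "0.1.52") -> bool:
    """True if the installed version is at least the known-good one."""
    return parse_version(installed) >= parse_version(minimum)


if __name__ == "__main__":
    try:
        v = metadata.version("llama-cpp-python")
        print(f"llama-cpp-python {v}, fixed: {is_fixed(v)}")
    except metadata.PackageNotFoundError:
        print("llama-cpp-python is not installed in this environment")
```

Note that a plain string comparison would get this wrong for versions like "0.1.9" vs "0.1.52", which is why the sketch compares numeric tuples.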

chdelacr commented 1 year ago

Thanks @Nyandaro

I'll try this and come back with results

chdelacr commented 1 year ago

It's probably llama_cpp_python-0.1.51 that caused the problem, so first remove it with "pip uninstall llama-cpp-python", then do a fresh "pip install llama-cpp-python". Otherwise, the llama-cpp-python in your Python folder may not have been updated to 0.1.52.

You may not need to install VS if you do this

I got this with the pip commands; I think it's the same output as from the .bat script (see attached image).

Update: Well, I don't know if installing VS2019 and the C++ components will make a difference; I'm still getting the output below and it can't build the model. If I open the UI and select the backend, the model appears in the dropdown but can't be used at all.

Requirement already satisfied: marshmallow-enum<2.0.0,>=1.5.1 in d:\github\gpt4all-ui\gpt4all\env\lib\site-packages (from dataclasses-json<0.6.0,>=0.5.7->langchain->pyaipersonality>=0.0.14->-r requirements.txt (line 14)) (1.5.1)
Requirement already satisfied: typing-inspect>=0.4.0 in d:\github\gpt4all-ui\gpt4all\env\lib\site-packages (from dataclasses-json<0.6.0,>=0.5.7->langchain->pyaipersonality>=0.0.14->-r requirements.txt (line 14)) (0.8.0)
Requirement already satisfied: mypy-extensions>=0.3.0 in d:\github\gpt4all-ui\gpt4all\env\lib\site-packages (from typing-inspect>=0.4.0->dataclasses-json<0.6.0,>=0.5.7->langchain->pyaipersonality>=0.0.14->-r requirements.txt (line 14)) (1.0.0)
Checking models...
Virtual environment created and packages installed successfully.
Launching application...
 ******************* Building Backend from main Process *************************
llama-cpp-python is already installed.
Backend loaded successfully
 ******************* Building Personality from main Process *************************
 ************ Personality gpt4all is ready (Main process) ***************************
Checking discussions database...
Please open your browser and go to http://localhost:9600 to view the ui
debug mode:false
 ******************* Building Backend from generation Process *************************
llama-cpp-python is already installed.
Backend loaded successfully
Couldn't build model
unsupported operand type(s) for /: 'WindowsPath' and 'NoneType'
 ******************* Building Personality from generation Process *************************
 ************ Personality gpt4all is ready (generation process) ***************************
No model loaded. Waiting for new configuration instructions
Listening on :http://localhost:9600
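The "unsupported operand type(s) for /: 'WindowsPath' and 'NoneType'" line in the log suggests the app joined its models directory with a model name that was never configured. A minimal sketch reproducing the error and guarding against it (the function and argument names are illustrative, not lollms internals):

```python
# Hedged sketch: pathlib's "/" join raises TypeError when the right-hand
# side is None, which matches the error in the log above when no model
# file name has been set in the configuration.
from pathlib import Path


def model_path(models_dir: Path, model_name):
    """Guard the join so a missing model name gives a clear error."""
    if model_name is None:
        raise ValueError("No model configured: select a model in the UI first")
    return models_dir / model_name


if __name__ == "__main__":
    try:
        Path("models") / None  # reproduces the failure from the log
    except TypeError as exc:
        print(f"TypeError: {exc}")
    print(model_path(Path("models"), "some-model.bin"))
```

This matches the later log lines ("Couldn't build model", "No model loaded"): the backend loads, but without a selected model the path join fails before inference can start.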
Nyandaro commented 1 year ago

It seems that the web GUI is working. Thank you for confirming.

Next, you have to match the model. Actually, the web GUI does not work with all models; most of the ones posted on the top page are not working now. I think this is due to the recent large-scale changes in llama.cpp, so I believe the models that can be used going forward are the V2 versions.

So, from Hugging Face or similar, select and download a newly converted GGML model if you are doing CPU inference. If you are running on a GPU, choose a GPTQ model for GPU inference.

For reference, I often check:

https://huggingface.co/TheBloke

I use the GGML models from his repositories.

Note that the ones labeled V3 don't seem usable yet.

By the way, I still don't know how to convert models myself.
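The naming convention described above (GGML for CPU inference, GPTQ for GPU inference) can be sketched as a small filter over a repository's file listing. The example file names below are illustrative, not real repository contents:

```python
# Hedged sketch: pick model files by format, following the convention in
# the comment above (GGML = CPU inference, GPTQ = GPU inference). The
# sample filenames are made up for illustration.
def pick_format(use_gpu: bool) -> str:
    """Return the format tag to look for on this machine."""
    return "gptq" if use_gpu else "ggml"


def matching_files(filenames, use_gpu: bool):
    """Filter a file listing down to the format for this machine."""
    tag = pick_format(use_gpu)
    return [f for f in filenames if tag in f.lower()]


sample_files = [
    "wizardlm-7b.ggmlv2.q4_0.bin",        # GGML: CPU inference
    "wizardlm-7b-GPTQ-4bit.safetensors",  # GPTQ: GPU inference
]
print(matching_files(sample_files, use_gpu=False))
```

A real check would also need to consider the GGML format version (the V2 vs V3 distinction mentioned above), since a matching name alone doesn't guarantee the backend can load the file.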

chdelacr commented 1 year ago

@Nyandaro thanks for your help, I'll take a look at these models and check if they work.

There might be some corrections needed, as the script looks for personalities in the root folder rather than the installation one, but that's not a big deal for now.

ParisNeo commented 1 year ago

The thing is, you are supposed to run the add_personality script from the root folder, in a cmd or bash shell. :)

But don't worry, I am building the personality zoo UI, which means you'll be able to do all this from the main interface.