ParisNeo / lollms-webui

Lord of Large Language Models Web User Interface
https://lollms.com
Apache License 2.0

Not-working from the start. Getting started? #185

Closed RobSeder closed 1 year ago

RobSeder commented 1 year ago

Expected Behavior

I would expect a relatively low barrier-to-entry to see functionality working.

Current Behavior

The app comes with no models, which is understandable. However, the docs only loosely say to "install some models", and that can mean many things, in many different file formats. So I started with the nomic-ai/gpt4all .bin models, after figuring out that I needed to create a new ./models/gpt4all/ folder. However, when I attempt to use them, I get this error:

invalid model file './models/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!

When I look up that script, it looks like it might come from the llama.cpp project: https://github.com/ggerganov/llama.cpp/commit/723dac55fa2ba7adc6e3fc8609781d1ad0378906
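In case it helps anyone who hits the same wall: the first four bytes of the .bin file tell you which ggml container format it uses. A rough check (the magic values are the ones I found in llama.cpp's loader, read little-endian, so treat them as an assumption):

    # Print the first four bytes of the model file as characters
    od -An -c -N4 ./models/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin
    #   l m g g  -> old unversioned ggml file (what the "too old" error is about)
    #   f m g g  -> versioned ggmf file
    #   t j g g  -> newer ggjt file that current llama.cpp builds expect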

It seems like I'm missing something. How did everyone so easily get up and running? This is not a working product out of the box. There is this whole matter of models, with a lot of nuance, and no documentation. What am I missing?

Steps to Reproduce

Clone the repo, run docker compose build, then docker compose up, and notice that you can't use the app without any models.
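Roughly, the commands I ran (repo URL as in the header above):

    git clone https://github.com/ParisNeo/lollms-webui.git
    cd lollms-webui
    docker compose build
    docker compose up
    # the UI starts, but ./models/ is empty, so nothing can be generated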

ParisNeo commented 1 year ago

OK, I'll build a graphical installer, just like the gpt4all-chat project, to make it easy for people to use the tool. I just need to finish working on V2 first. It is not easy at all, but we are progressing.

RobSeder commented 1 year ago

@ParisNeo that's not needed, just documentation. Even just a few sentences to give us some clues on where to head. What am I missing? Thank you for your efforts!

ParisNeo commented 1 year ago

Well, you have to install using either webui.bat or webui.sh, depending on your OS. You need to have Python 3.10 and Git installed and on your PATH before doing this.
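For example, on Linux (a rough sketch; the Windows equivalent is webui.bat):

    # check the prerequisites first
    python3.10 --version   # should print Python 3.10.x
    git --version

    # then run the installer script from the repo root
    bash webui.sh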

Once this is done, select your backend (llama_cpp is the best for now). Then download a model into your models/llama_cpp folder. Be aware that the llama_cpp backend supports only llama models; you can use the gpt_j backend for GPT-J models. To select the model, open configs/local_default.yaml and modify it (if you don't have one, copy configs/default.yaml to configs/local_default.yaml first).
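A sketch of that step (the key names shown in the comments are indicative; check default.yaml for the exact ones in your version):

    # create a local config if you don't have one yet
    cp configs/default.yaml configs/local_default.yaml

    # then edit configs/local_default.yaml and point it at your backend
    # and model, something like:
    #   backend: llama_cpp
    #   model: your-ggml-model.bin   # a file placed under models/llama_cpp/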

Once you have selected the right backend and the right model, run webui.bat or webui.sh again and it will launch the application.

You can also do what the webui script does manually: activate the environment (it is in the env folder), then launch the application with python app.py.
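Concretely (Linux/macOS shown, assuming the installer created a standard venv in env/):

    # activate the environment created by webui.sh / webui.bat
    source env/bin/activate        # on Windows: env\Scripts\activate

    # launch the application directly
    python app.py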

I hope this helps

RobSeder commented 1 year ago

Thank you!!

ollybrain commented 1 year ago

Since today I get an error from app.py when I run it on a Debian 11 VM, with Python 3.10 and the environment activated:

python3.10 app.py
* Building Binding from main Process *** Loading binding llama_cpp_official install ON
Ungültiger Maschinenbefehl (Illegal instruction)

ParisNeo commented 1 year ago

Hi. This looks like they added some CPU-specific instructions that your hardware doesn't support. This unfortunately happens a lot with these tools: they change their code without considering backwards compatibility. That's why I support multiple bindings, so that if one breaks, you can still use another. Try ctransformers; it is the most stable and covers the widest range of hardware. You can also complain on the llama-cpp-python GitHub: https://github.com/abetlen/llama-cpp-python
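If you want to check whether it is really a missing CPU instruction (usually AVX/AVX2/FMA on older CPUs or stripped-down VMs), something like this can help. The CMAKE_ARGS rebuild is only a guess based on llama-cpp-python's build options at the time, so check their README first:

    # see which vector extensions the VM's CPU actually exposes
    grep -o -w -e avx -e avx2 -e fma /proc/cpuinfo | sort -u

    # if avx2/fma are missing, rebuilding llama-cpp-python without them may help
    CMAKE_ARGS="-DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" \
      pip install --force-reinstall --no-cache-dir llama-cpp-python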