Closed RobSeder closed 1 year ago
OK, I'll build a graphical installer, just like the gpt4all-chat project, to make the tool easy to use. I just need to finish working on V2 first. It isn't easy at all, but we are making progress.
@ParisNeo that's not needed, just documentation. Even a few sentences to give us some clues on where to head. What am I missing? Thank you for your efforts!
Well, you have to install using either webui.bat or webui.sh, depending on your OS. You need to have Python 3.10 and git installed and on your PATH before doing this.
Once this is done, select your backend (llama_cpp is the best for now). Then download a model to your models/llama_cpp folder. Be aware that the llama_cpp backend supports only llama_cpp models; use gpt_j for GPT-J models. To select the model, open your configs/local_default.yaml and modify it. If you have none, copy configs/default.yaml to configs/local_default.yaml first.
Once you have selected the right backend and the right model, run webui.bat or webui.sh again and it will launch the application.
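The backend/model setup above can be sketched as shell commands. The paths (configs/, models/llama_cpp/) come from this thread, but the YAML contents below are hypothetical placeholders, not the real schema; the scratch directory just makes the snippet safe to run anywhere:

```shell
# Sketch of the setup steps above (paths from the thread; YAML keys are assumptions).
cd "$(mktemp -d)"                     # scratch dir so this is safe to run anywhere
mkdir -p configs models/llama_cpp     # folders the instructions mention
printf 'backend: llama_cpp\n' > configs/default.yaml   # stand-in for the shipped default
# if you have no local config yet, copy the default and edit the copy:
cp configs/default.yaml configs/local_default.yaml
ls configs
```

After this, you would download a .bin model into models/llama_cpp/ and point local_default.yaml at it.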
You can do manually what webui does: activate the environment (it is in the env folder), then launch the application with python app.py.
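A minimal sketch of those manual steps, assuming a Linux/macOS shell and that the env folder sits in the repo root as described above (the guard makes the snippet safe to run outside the repo):

```shell
# Manual equivalent of what webui.sh does, per the comment above.
if [ -f env/bin/activate ]; then
    . env/bin/activate        # activate the bundled virtual environment
    python app.py             # launch the application directly
else
    echo "run this from the repo root (env/ not found)"
fi
```

On Windows the activation script would be env\Scripts\activate instead.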
I hope this helps
Thank you!!
Since today I get an error from app.py when I start the installation on a Debian 11 VM, with Python 3.10 and the environment activated:
python3.10 app.py
* Building Binding from main Process *
** Loading binding llama_cpp_official install ON
Ungültiger Maschinenbefehl (German: "Illegal instruction")
Hi. It looks like they added some specific CPU instructions that your hardware doesn't support. This unfortunately happens a lot with these tools: they change their code without considering backward compatibility. That's why I support multiple bindings, so that if one breaks you can still use another. Try ctransformers; it is the most stable and covers the widest range of hardware. You can also complain on the llama-cpp-python GitHub: https://github.com/abetlen/llama-cpp-python
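One way to see whether the VM's CPU exposes the SIMD extensions (AVX/AVX2/FMA) that prebuilt llama-cpp binaries commonly assume; a missing flag typically produces exactly this kind of illegal-instruction crash. This diagnostic is my suggestion, not something from the thread:

```shell
# List SIMD-related CPU flags the kernel reports (Linux only).
# If avx/avx2 are absent, a binary compiled with them dies with "Illegal instruction".
if [ -r /proc/cpuinfo ]; then
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' \
        | grep -E '^(avx|avx2|f16c|fma|sse4_2)$' || echo "no AVX-class flags found"
else
    echo "/proc/cpuinfo not available on this system"
fi
```

If the flags are missing, either enable host CPU passthrough in the VM settings or use a binding built without those extensions.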
Expected Behavior
I would expect a relatively low barrier-to-entry to see functionality working.
Current Behavior
The app comes with no models, which is understandable. However, it's only loosely mentioned that you should "install some models". That can mean many things, with many different file formats. So I started with nomic-ai/gpt4all .bin models, after figuring out that I needed to create a new ./models/gpt4all/ folder. However, when I attempt to use them, I get an error. When I look up the script involved, it looks like it might come from the llama.cpp project: https://github.com/ggerganov/llama.cpp/commit/723dac55fa2ba7adc6e3fc8609781d1ad0378906
It seems like I'm missing something. How did everyone else get up and running so easily? This is not a working product out of the box: there is this whole matter of models, with a lot of nuance and no documentation. What am I missing?
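For reference, the undocumented step described above boils down to creating the backend-specific model folder and dropping a .bin file into it. The folder name is taken from this report; the model file itself is whatever you downloaded, so it is left as a commented placeholder:

```shell
# Create the folder the app expects for gpt4all-format models
# (path taken from the report above), then place a downloaded .bin inside.
cd "$(mktemp -d)"                 # scratch dir so this sketch is safe to run
mkdir -p models/gpt4all
# mv ~/Downloads/<your-model>.bin models/gpt4all/   # supply your own model file
ls -d models/gpt4all
```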
Steps to Reproduce
Clone the repo, run docker compose build, then docker compose up. Then notice that you can't use the app without any models.