Closed: Jipok closed this issue 1 year ago
@Jipok Convenience is a tradeoff I'm okay with for the time being, though I have plans to eventually allow for custom backends rather than relying on llama.cpp. If you have any ideas/suggestions on how to simplify the whole setup (or parts of it), I'd love to hear them. I'll be creating a Discord space at some point this week.
`make` call.

In the Model Download section:
> The 7B Llama-2 based model TheBloke/WizardCoder-Python-13B-V1.0-GGUF is a model fine-tuned by a kind redditor
This sentence is very strange and has several logical errors (TheBloke/WizardCoder-Python-13B-V1.0-GGUF is a 13B model, not 7B). It seems you previously had a link to a different model here.
This is funny.
The first thing the user sees after launch is the llama.cpp output. I think many, like me, will try to open http://127.0.0.1:8081 and get nothing useful. Make it obvious which link to open, and preferably hide the info from llama.cpp. Perhaps you should automatically open the desired link in the browser. Although I didn't like that behavior before, many projects do this and I'm used to it.
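A minimal Python sketch of what I mean by auto-opening, just to illustrate the idea (the address http://127.0.0.1:8081 and the timeout are placeholders, not the project's actual defaults): poll until something accepts connections on the port, then open the page with the standard `webbrowser` module.

```python
import socket
import time
import webbrowser

HOST, PORT = "127.0.0.1", 8081  # assumed UI address; adjust to the real default
URL = f"http://{HOST}:{PORT}/"

def wait_for_server(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until something accepts TCP connections on host:port, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False

if wait_for_server(HOST, PORT):
    # Open the UI in the user's default browser once the server is reachable.
    webbrowser.open(URL)
else:
    print(f"Server did not come up at {URL}; open it manually once it is running.")
```

Even just printing the correct URL prominently (instead of the llama.cpp log) would solve most of this.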
I addressed most of the issues; I'll be merging the PR, so feel free to open an issue for the outstanding changes. Thanks a bunch!
Although in my opinion submodules are not very convenient (for you), they are still preferable to what was there before.