withcatai / catai

Run an AI ✨ assistant locally! With a simple API for Node.js 🚀
https://withcatai.github.io/catai/
MIT License
443 stars · 28 forks

Can't Install model #5

Closed · sam1am closed 1 year ago

sam1am commented 1 year ago

I tried this on the 22nd and was able to install models, but couldn't get them to serve (it complained that the model was not found).

With the latest version, it doesn't appear to install models anymore.

catai install Vicuna-13B
$ cd /usr/lib/node_modules/catai

When I run install, I just see a cd command echoed to the terminal and nothing else. Same thing if I try to run it from that directory.

sam1am commented 1 year ago

I tried this in Windows as well and got the same result.

ido-pluto commented 1 year ago

Thank you for reporting. I am facing the same issue - I am on it :)

ido-pluto commented 1 year ago

The problem is that the model's URL changed. In the meantime, you can install the model directly from a link:

catai install https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vic13b-q4_0.bin
mledwards commented 1 year ago

I tried that install line, and got this error when serving:

(screenshot attached)

ido-pluto commented 1 year ago

I know, there is a bug with installing by version. I will look into it.

mledwards commented 1 year ago

Thank you. I really appreciate the work you're putting in here. This sort of library helps simplify the crazy world of open source LLMs.

ido-pluto commented 1 year ago

The problem is a new GGML format that is not yet supported by llama.cpp. For now, you can try the old format:

https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/819b698c54b906ca7f3e1774cf748c2018932b07/ggml-vic13b-uncensored-q4_0.bin
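For anyone unsure which flavor a downloaded model file is: llama.cpp model files start with a 4-byte magic number, and the newer flavors follow it with a version number (the "GGML v3" mentioned here is the ggjt v3 variant). A minimal sketch of a format checker, assuming the magic values from the llama.cpp source of that era (treat the constants as assumptions, not an official API):

```python
import struct

# Magic values as found in the llama.cpp loader at the time (assumed):
GGML_MAGIC = 0x67676D6C  # "ggml" - original, unversioned format
GGMF_MAGIC = 0x67676D66  # "ggmf" - versioned successor
GGJT_MAGIC = 0x67676A74  # "ggjt" - versioned, v1..v3

def detect_ggml_format(path: str) -> str:
    """Read the leading bytes of a model file and report its GGML flavor."""
    with open(path, "rb") as f:
        magic = struct.unpack("<I", f.read(4))[0]
        if magic == GGML_MAGIC:
            return "ggml (unversioned)"
        if magic in (GGMF_MAGIC, GGJT_MAGIC):
            # Versioned formats store a little-endian uint32 version next.
            version = struct.unpack("<I", f.read(4))[0]
            name = "ggmf" if magic == GGMF_MAGIC else "ggjt"
            return f"{name} v{version}"
        return "unknown"
```

Running this against a downloaded `.bin` before serving would show whether it is an older format llama.cpp already supports or the newer ggjt v3 it does not yet handle.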
ido-pluto commented 1 year ago

The model links are now updated to point at the old format, so this issue should be fixed, at least until llama.cpp can handle GGML v3.

mledwards commented 1 year ago

Works for me! Thank you sir.