-
Please refer to the [troubleshooting](https://github.com/ido-pluto/catai/blob/main/docs/troubleshooting.md) before opening an issue. You might find the solution there.
**Describe the bug**
CatAI r…
-
Hello 👋
While following [development.md](https://github.com/ido-pluto/catai/blob/main/docs/development.md) to run the server locally, I get the error below when starting it. Wo…
-
**Describe the bug**
I have set up CatAI and downloaded a model; the web interface opens, and I see this in the server console:
```
new connection
```
but as soon as I type anything in the web …
-
Hello,
I am trying to see where/how CPU and/or GPU information is passed during server start, but I am unable to find it.
Thank you
-
Spinning this off into its own issue.
-
```
C:\Users\micro\Downloads>catai serve --ui chatGPT
$ cd C:\Users\micro\AppData\Roaming\npm\node_modules\catai
$ npm start -- --production true --ui chatGPT
> catai@0.3.12 start
> node src/in…
```
-
It looks like the API streams the whole result to the server console before sending the output back as the response. Is there a way to return the results as soon as they're available?
Or if not, th…
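Streaming in this sense usually means flushing each token to the client as the model emits it, instead of buffering the full completion. A minimal sketch of the idea — the function names here are hypothetical, not CatAI's actual API:

```typescript
// Hypothetical stand-in for the model loop; a real backend would yield
// each llama.cpp token as soon as it is produced.
async function* generateTokens(prompt: string): AsyncGenerator<string> {
    for (const token of ["Hello", ",", " world"]) {
        yield token;
    }
}

// Forward every token to the client immediately (e.g. over the WebSocket),
// while still accumulating the full text for logging at the end.
async function streamToClient(
    prompt: string,
    send: (chunk: string) => void
): Promise<string> {
    let full = "";
    for await (const token of generateTokens(prompt)) {
        send(token); // flush each chunk as it arrives
        full += token;
    }
    return full; // complete response, available once generation finishes
}
```

The key point is that nothing waits for the whole result: each chunk is handed to `send` the moment the generator yields it.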
-
Hi,
it would be good to have some kind of user mode and developer mode that can be toggled with an environment variable.
That way you have more parameters to choose from in developer mode, and when you a…
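One way the suggested toggle could look — `CATAI_DEV_MODE` and the parameter names below are assumptions for illustration, not existing CatAI settings:

```typescript
// Parameters always shown to users vs. the extra ones exposed in dev mode.
const USER_PARAMS = ["model", "temperature"];
const DEV_PARAMS = [...USER_PARAMS, "topK", "topP", "repeatPenalty"];

// Expose the extended parameter set only when the (hypothetical)
// CATAI_DEV_MODE environment variable is set to "1".
function visibleParams(
    env: Record<string, string | undefined> = process.env
): string[] {
    return env.CATAI_DEV_MODE === "1" ? DEV_PARAMS : USER_PARAMS;
}
```

Users would then opt in with `CATAI_DEV_MODE=1 catai serve`, while the default run keeps the shorter list.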
-
**Describe the bug**
I get this error trying to use the Vicuna 13B uncensored model:
```
llama.cpp: loading model from /Users/jvisker/catai/models/Vicuna-13B-Uncensored
error loading model: unrec…
```