-
Currently the heartbeat response is simply a signed UUID. We should also return the active task types as an array of strings, e.g. `['synthesis', 'taskfoobar']`.
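A possible shape for the extended response (a sketch only; the field names `signature`, `uuid`, and `task_types` are assumptions, not the existing schema):

```json
{
  "signature": "c2lnbmVkLXV1aWQ...",
  "uuid": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
  "task_types": ["synthesis", "taskfoobar"]
}
```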
-
### What is the issue?
Hi, I'm using llama2 models, and when I ask the AI to explain something, it does respond and explain it, but when it reaches the end, it instead prints out a certain line,
``…
-
You can refer to this document:
https://mp.weixin.qq.com/s/US-qPUvMLp7TWEGCeh2EiQ
Video:
https://www.youtube.com/watch?v=NYRUC0v50DI&ab_channel=KevinThomas
ollama-voice
https://github.com/maudoin/ollama-voice
The implemented fun…
-
Would you consider supporting JSON-mode output, just like llama.cpp, Ollama, and OpenAI do?
e.g. https://llama-cpp-python.readthedocs.io/en/latest/#json-and-json-schema-mode
It is very limited i…
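For reference, Ollama's REST API already exposes this through a `format` field on the request body. A minimal sketch of a JSON-mode request (the model name and prompt are placeholders):

```python
import json

# Sketch of a JSON-mode request body for Ollama's /api/generate endpoint.
# "format": "json" asks the server to constrain the model's output to valid JSON.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "List three colors as a JSON object under the key 'colors'.",
    "format": "json",   # request JSON-constrained output
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

The same body can then be POSTed to a running Ollama instance (by default at `http://localhost:11434/api/generate`).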
-
I had some trouble getting this plugin running because it uses port 11343, which was already in use by my local copy of the ollama service.
The error from Docker, of course, wasn't very spec…
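A quick way to check whether a port is already taken before starting the container (a sketch; 11343 is the port from the report above, and the helper name is mine):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, i.e. the port is taken
        return sock.connect_ex((host, port)) == 0

if port_in_use(11343):
    print("port 11343 is taken; map the container to a different host port")
```

If the port is taken, remapping the container's published port (e.g. `-p <free-port>:11343` in `docker run`) avoids the clash without touching the local ollama install.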
-
https://github.com/ollama/ollama
This is a service that deploys large models locally and provides an API interface. When can it be integrated?
Here is the list of supported models: https://ollam…
-
We are developing an application on Ollama, and the performance would be acceptable for a user; however, while we are developing software, the lag time to generate can be very slow.
Would it be possible to…
-
### Feature Area
Chat
### Painpoint
Hello, I am sorry if this is the wrong place/categorization. I don't know what's responsible (the model or smart2Brain).
It does not give back the content…
-
Not all LLMs support calling functions in a robust way.
Some providers, like Ollama, support JSON mode.
Others don't, and have to rely on other parsing techniques, in addition to supporting many func…
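One common fallback when a provider has no JSON mode is to extract the first parseable JSON object from free-form model output. A minimal sketch (the function name is mine, not from any library, and the brace scan deliberately ignores braces inside strings):

```python
import json
import re

def extract_json(text: str):
    """Best-effort: return the first parseable JSON object found in `text`.

    Tries the whole string first (the JSON-mode case), then scans for
    brace-balanced candidates (the free-form case). Returns None on failure.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Scan for a balanced {...} span and try to parse it.
    for match in re.finditer(r"\{", text):
        depth = 0
        for i in range(match.start(), len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[match.start() : i + 1])
                    except json.JSONDecodeError:
                        break
    return None
```

For example, `extract_json('Sure! {"a": 1} hope that helps')` recovers `{"a": 1}` even though the surrounding chatter makes the raw string invalid JSON.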
-
### What happened?
I am trying to run the `ollama/dolphin-phi` model on Ollama, but `/chat/{chat_id}/question` throws a `{"error":"model 'llama2' not found, try pulling it first"}` error. I don't want to loa…