-
### 🔖 Feature description
Finally, I recently added a swappable base_url for the OpenAI client, so if you configure DocsGPT with LLM_NAME=openai
you can run any model you want locally with an OpenAI-compa…
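For context, the idea is that an OpenAI-compatible client only needs its base URL overridden to reach a local server. A minimal sketch, assuming Ollama's OpenAI-compatible endpoint on the default port (the URL, model name, and key below are placeholders, not DocsGPT settings):

```python
# Sketch: point the standard OpenAI Python client at a local
# OpenAI-compatible endpoint instead of api.openai.com.
# The URL and model name are placeholders, not DocsGPT defaults.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible API
    api_key="not-needed-locally",          # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama2",  # any model the local server has pulled
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```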
-
### What feature would you like to be added?
How can Magentic-One be used with local LLMs or Ollama?
### Why is this needed?
This will enable users to use Magentic-One with open-source LLMs other than …
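As a rough sketch of the local-model side only (not Magentic-One's actual configuration, which the snippet does not show), a locally pulled model can be queried through the ollama Python package; frameworks built on the OpenAI client can usually be pointed at Ollama's OpenAI-compatible endpoint (http://localhost:11434/v1) instead, as in the earlier sketch:

```python
# Sketch: talking to a local Ollama model via the ollama Python package.
# A framework like Magentic-One would still need a model client that speaks
# its expected interface; this only shows the local-model side.
import ollama

reply = ollama.chat(
    model="llama3",  # placeholder: any locally pulled model
    messages=[{"role": "user", "content": "Summarize this page in one line."}],
)
print(reply["message"]["content"])
```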
-
## Description
I am encountering a timeout error when running the following code on macOS. The error occurs approximately 10 seconds after the request is made. I would like to know if there is a wa…
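Since the failing code is truncated, the exact fix depends on which client library it uses, but a common workaround is simply raising the HTTP timeout. A hedged sketch against the Ollama REST API directly (endpoint and model name assumed):

```python
# Sketch: a direct call to Ollama's REST API with an explicit, generous
# timeout. The ~10 s failure in the report may come from a client-side
# default; the right fix depends on the library used in the truncated code.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,  # seconds; large models can take a while to load on first call
)
resp.raise_for_status()
print(resp.json()["response"])
```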
-
When using large models like Llama2:70b, the download files are quite big.
As a user with multiple local systems, having to `ollama pull` on every device means that much more bandwidth and time spent…
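One workaround, sketched below under the assumption that the machines share a LAN, is to pull the model once and let the other systems talk to that single Ollama server over the network instead of pulling their own copies (the hostname is a placeholder, and the server must listen on a reachable interface, e.g. OLLAMA_HOST=0.0.0.0):

```python
# Sketch: clients on other machines reuse one shared Ollama server instead of
# each pulling llama2:70b themselves. "ollama-box.lan" is a placeholder host;
# the server side must be reachable over the network.
from ollama import Client

shared = Client(host="http://ollama-box.lan:11434")

reply = shared.chat(
    model="llama2:70b",
    messages=[{"role": "user", "content": "Hello from another machine"}],
)
print(reply["message"]["content"])
```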
-
**Problem Description**
TypeError: error when get /knowledge_base/list_knowledge_bases: Client.__init__() got an unexpecte…
-
Hi.
Thank you for this cool server. I am developing an open-source AI tool that is compatible with multiple services/models, and Ollama is one of them. Except that I need to use it with multiple cl…
-
**Current Documentation**:
[API documentation](https://github.com/ollama/ollama/blob/4759d879f2376ffb9b82f296e442ec8ef137f27b/docs/api.md?plain=1#L79) states:
> A stream of JSON objects is retur…
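A minimal sketch of consuming that stream, where each line of the body is one JSON object and the final one carries "done": true (endpoint and model name assumed):

```python
# Sketch: consuming Ollama's streaming response, where each line of the HTTP
# body is one JSON object with a partial "response" field.
import json
import requests

with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?"},
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
```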
-
**Describe the bug**
Unable to use a local Ollama model
**To Reproduce**
Steps to reproduce the behavior:
1. Install using pip
2. interprete…
-
I have Ollama and Miniconda on a 7th-gen i7 with a GTX 1070 and 16 GB of RAM.
I changed the config YAML, replacing medium.en with medium (because I want to speak Spanish), and changed the "en" settings to "es". But running locally I can hav…
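The underlying project is not named in the snippet, but the described change matches how Whisper separates its English-only (*.en) checkpoints from the multilingual ones. A sketch with the standalone openai-whisper package, purely as an assumption about what the YAML maps to:

```python
# Sketch using the standalone openai-whisper package (an assumption; the
# snippet's YAML config belongs to an unnamed project). The English-only
# checkpoints end in ".en"; Spanish needs a multilingual model plus language="es".
import whisper

model = whisper.load_model("medium")          # multilingual, unlike "medium.en"
result = model.transcribe("audio.wav", language="es")
print(result["text"])
```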
-
I have deployed the proxy in a docker container on a Linux server. When accessing the service with a Windows client I get this error message:
```
ollama pull --insecure http://my-proxy-server.lan:…
```