-
```
✓ Ready in 667ms
○ Compiling / ...
✓ Compiled / in 1628ms (1571 modules)
✓ Compiled in 293ms (470 modules)
There was a problem with your fetch operation: [Error: Network response was not …
```
-
### Describe the solution you'd like
It would be great if we could chat with our memos, using Ollama with Open WebUI to recall them and ask questions about the memos we make.
### Type of featur…
-
### Describe your problem
The related issue says to use Ollama to start a local model, but `https://ollama.com/library` doesn't support ChatGLM, or it would take a lot of work to support ChatGLM…
-
It would be nice to support self-hosted LLMs. It doesn't have to be Ollama, but Ollama seems fairly easy to interface with.
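For what it's worth, Ollama's HTTP API really is small; a minimal sketch using only the Python standard library (the model tag and default endpoint are assumptions, adjust for your setup):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST a prompt to a local Ollama server and return the completion text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With an abstraction this thin, supporting self-hosted backends mostly means making the base URL and model name configurable.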
-
Great piece of software @d42me! It'd be pretty easy to code the two main LLM entry points to allow a range of interfaces instead of just OpenAI. In particular, using Ollama would open the whole system…
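One way those entry points could stay generic: Ollama also serves an OpenAI-compatible API under `/v1`, so the same request-building code can target either backend by swapping the base URL. A sketch (the `build_chat_request` helper and model names are illustrative, not from this project):

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request. Pointing base_url at
    Ollama's OpenAI-compatible endpoint reuses the exact same code path."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # Ollama ignores the API key
        },
    )


# Same entry point, two backends -- only the base URL and model differ:
openai_req = build_chat_request("https://api.openai.com/v1", "gpt-4o-mini",
                                [{"role": "user", "content": "hi"}])
ollama_req = build_chat_request("http://localhost:11434/v1", "llama3",
                                [{"role": "user", "content": "hi"}])
```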
-
Hey there! I love the look of this project, and it could easily become my favourite Discord bot. The only issue is API compatibility; for example, I would like to use Ollama as my API service.
Inte…
-
### The problem
The Ollama integration fails to connect to a locally running instance of Ollama 0.1.40
### What version of Home Assistant Core has the issue?
core-2024.6.0
### What was the l…
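A first step in narrowing down failures like this is checking whether Ollama is reachable at all from where Home Assistant runs. A diagnostic sketch (default port assumed; verify the host/port your integration is configured with):

```shell
# Should print a small JSON document with the server version:
curl -s http://localhost:11434/api/version

# Lists the models Ollama has installed:
curl -s http://localhost:11434/api/tags

# By default Ollama binds to loopback only. If Home Assistant runs in a
# container or on another machine, Ollama must listen on all interfaces:
OLLAMA_HOST=0.0.0.0 ollama serve
```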
-
I wrote an agent that puts each round's output into short-term memory and sends it to the LLM for its next round of thinking and reasoning. However, **on the third round of interaction I hit an error about exceeding a 1024-token limit.**
In theory, the Qwen2-72B-Instruct model supports a context length of up to **128k tokens**, so I'm confused about whether I misconfigured something. I'm serving the model with Ollama, and when calling the API I already set the token limit to **8…
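A likely cause of symptoms like this is Ollama's own default context window rather than the model's architectural limit: Ollama truncates at its configured `num_ctx` unless it is raised per request (or with `PARAMETER num_ctx` in a Modelfile). A sketch of a request body that raises it (the model tag is an assumption):

```python
import json


def build_chat_payload(model: str, messages: list, num_ctx: int) -> dict:
    """Ollama /api/chat body. Without options.num_ctx the server falls back
    to its own default context window, regardless of what the underlying
    model architecture supports."""
    return {
        "model": model,
        "messages": messages,
        "stream": False,
        "options": {"num_ctx": num_ctx},  # context window for this request
    }


payload = build_chat_payload(
    "qwen2:72b-instruct",
    [{"role": "user", "content": "hi"}],
    num_ctx=8192,
)
print(json.dumps(payload, indent=2))
```

Note that `num_ctx` must be set on each request (or baked into a Modelfile); setting a limit on the client side alone does not change what the server allocates.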
-
### Actual Behavior
The [open-webui](https://docs.openwebui.com/) container is not able to connect to the service running on the host machine
### Steps to Reproduce
- [Download and Install ollama](https:/…
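A common cause here: inside a container, `localhost` refers to the container itself, not the host. A sketch of a run command under that assumption (image name and the `OLLAMA_BASE_URL` variable follow Open WebUI's documentation; check it against your version):

```shell
# On Docker Desktop, host.docker.internal already resolves to the host.
# On Linux, map it explicitly via the host-gateway alias:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```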
-
I had written this at a time when the Ollama API didn't exist.
There is a lot of bloat around LangChain, and I'd like to get to something a bit more performant, especially when indexing and piping …