-
Hi!
I propose a feature to support third-party API providers such as LocalGPT:
![image](https://github.com/CtrlAltFocus/obsidian-plugin-auto-tag/assets/4013062/a130b5a8-6b87-402d-840a-d66f316b8834)
…
-
**Description**
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU.
**Success Criteria**
Utilize both ANE & GPU, not just GPU on Apple Sili…
-
Hi there,
Thank you for a wonderful piece of software; it has a very friendly interface and a great take on usability.
I am using ollama with [ollama_proxy_server](https://github.com/ParisNeo/ol…
-
**Problem Description**
The llava model in Ollama supports image input, but after selecting the llava model in Chatbox, images cannot be sent; it reports `Current model does not support image input`.
…
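For reference, Ollama's HTTP API does accept images for multimodal models like llava: the `/api/generate` request body takes an `images` field holding base64-encoded image data. A minimal sketch of building such a payload (the image bytes are a placeholder, and the exact field names follow my reading of the Ollama API docs, so treat them as assumptions):

```python
import base64
import json

# Sketch of an Ollama /api/generate request body for a multimodal model.
# The "images" field holds base64-encoded image data; these bytes are a
# placeholder for illustration, not a real picture.
image_bytes = b"\x89PNG fake image bytes"
payload = {
    "model": "llava",
    "prompt": "What is in this picture?",
    "images": [base64.b64encode(image_bytes).decode("ascii")],
}
print(json.dumps(payload))
```

If the server-side API accepts this, the limitation described above would be in the client UI, not the model.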
-
### What are you trying to do?
I am new to ollama (including llama.cpp, of course), so my questions may be a bit silly.
My use case is to serve both CLIP and LLaVA (which combines clip and mistral…
-
### 💻 System environment
macOS
### 📦 Deployment environment
Docker
### 🌐 Browser
Chrome
### 🐛 Problem description
![1715781855586](https://github.com/lobehub/lobe-chat/assets/16172523/7781693b-0a03-426d-8193-0cdae16327f3)
The Ollama connectivity check fails; I configured do…
-
It would be nice if GitLens's AI features could integrate with LLMs running locally, for example via [ollama](https://github.com/ollama/ollama). Not everybody can use the cloud, for one reason or anoth…
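For what it's worth, talking to a local Ollama server only needs a small HTTP request. A minimal sketch (the endpoint and JSON fields follow Ollama's documented `/api/generate` route; the default URL and model name are assumptions about a typical local install):

```python
import json
import urllib.request

# Assumed default address of a local Ollama install.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build (but do not send) a non-streaming generate request."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize this diff as a commit message: ...")
print(req.full_url)
```

Sending `req` with `urllib.request.urlopen` would return the model's completion as JSON, so a tool could swap this in wherever it currently calls a cloud provider.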
-
Simple code like the following:
`ollama.chat(model='mistral:instruct', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])`
OR
```
import ollama
response = ollama.chat(model='mistral:i…
```
-
### Describe your problem
But the LLM options are limited. I have an Ollama Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as a model in RagFlow. Please assist. This software is very good and flexib…
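As a first debugging step, it may help to confirm the Ollama endpoint is reachable from wherever RagFlow runs. A small sketch using Ollama's `/api/tags` route, which lists locally pulled models (the base URL mirrors the one in the issue and is an assumption):

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url: str = "http://127.0.0.1:11434") -> list[str]:
    """Return the names of models the Ollama server reports, or [] if
    the server is unreachable (e.g. wrong host/port, or RagFlow running
    in a container that cannot see the host's loopback address)."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

print(list_local_models())
```

Note that if RagFlow runs inside Docker, `127.0.0.1` refers to the container itself, not the host, which is a common reason the model cannot be added.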
-
Hey all, I thought I was having the same problem as described by this previously closed issue:
https://github.com/joaomdmoura/crewAI/issues/21
It turns out that I was actually experiencing the f…