-
### Bug Description
Cannot connect to the local Ollama server. Ollama and ChatNext are both on the latest version. I can get an Ollama response from a Python script, so the server is OK.
### Steps to Reproduce
![微信图片_…
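For anyone hitting the same thing, here's roughly the check I use to confirm the server side is fine before blaming the client. It assumes Ollama's default port 11434 and a model name like `llama3` that you've already pulled; adjust both if your setup differs.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumption: default Ollama port


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request against Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_generate_request("llama3", "Say hello")  # model name is an assumption
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(json.loads(resp.read())["response"])
    except OSError as exc:
        # Connection refused / timeout means the server side is the problem.
        print(f"Server unreachable: {exc}")
```

If this prints a model response, the server is healthy and the problem is in how the client app builds its request or endpoint URL.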
-
Hi, since the latest release of Claude, it is now a better choice for various tasks, and I was wondering whether the Claude API could be supported in Intellibar. Alongside that, Ollama is a great way to interface…
-
Docker image installed on multiple Linux and Mac systems, both with and without GPUs.
Local proxy settings set in daemon.json as well as passed to Docker with -e and --env.
API interface works fine, bu…
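As a sanity check on the proxy side, this is roughly how a Python process inside the container sees those `-e`/`--env` settings. The proxy host, port, and bypass list here are hypothetical placeholders, not values from the report:

```python
import os
import urllib.request

# Simulate the environment that `docker run -e http_proxy=... -e no_proxy=...`
# would give the containerized process (values here are hypothetical).
os.environ["http_proxy"] = "http://proxy.internal:3128"
os.environ["no_proxy"] = "localhost,127.0.0.1"

# urllib (like most HTTP clients) reads proxy settings from the environment.
proxies = urllib.request.getproxies()
print(proxies["http"])  # the proxy that non-exempt requests will go through

# proxy_bypass() reports whether a given host skips the proxy entirely --
# a local endpoint on localhost should bypass it via no_proxy.
print(bool(urllib.request.proxy_bypass("localhost")))
```

If the API works but model pulls fail (or vice versa), comparing what `getproxies()` reports inside the container against the daemon.json settings is a quick way to find which layer is dropping the configuration.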
-
Where are all the project files like app.py, requirements.txt, etc.?
-
It would be super helpful to set the temperature for models via the command line, rather than having to create a separate model file for every model-and-temperature combination.
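Until something like that lands, a possible workaround (assuming you can hit the REST API directly rather than the CLI) is to pass the temperature per request through the `options` field of `/api/generate`, so no extra Modelfile is needed. The endpoint and model name below are assumptions:

```python
import json
import urllib.request


def generate_request(model: str, prompt: str, temperature: float) -> urllib.request.Request:
    """Set temperature per request via the `options` field of /api/generate,
    instead of baking PARAMETER temperature into a separate Modelfile."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},  # overrides the model default
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # assumption: default local endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The same `options` mechanism accepts other sampling parameters too, so one model entry can serve any number of temperature settings.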
-
**Why**
Just like Big-AGI can interface with local LLM (for instance, ollama), it would be cool to have the ability in the Draw section to interface with Automatic1111 which possesses an API (link pr…
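For reference, a minimal sketch of what talking to Automatic1111 could look like, assuming it was started with `--api` on the default port 7860. Only `prompt` and `steps` are shown; the real payload accepts many more fields, and the defaults here are illustrative:

```python
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # assumption: default Automatic1111 port, --api enabled


def txt2img_request(prompt: str, steps: int = 20) -> urllib.request.Request:
    """Build a request against Automatic1111's txt2img endpoint.

    The response (when sent) contains base64-encoded images under "images".
    """
    payload = {"prompt": prompt, "steps": steps}
    return urllib.request.Request(
        f"{A1111_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```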
-
Putting this here for communication and because if I don't take notes I will lose my train of thought. Thanks, ChatGPT, for helping me organize this.
---
Creating a macOS application that integrates with…
-
Feature Request: Enhance Project with Support for Additional Large Language Models (LLMs) - Including Local AI Assistants
I've been utilizing your project, and it's truly impressive! I wanted to p…
-
We have a growing number of different API providers: OpenAI, Azure, Cloudflare Gateway. At the same time, other tools keep adding OpenAI-compatible APIs (e.g. Ollama).
Considering #693, here'…
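To illustrate why a single OpenAI-compatible code path is attractive: one request builder can serve all of these providers just by swapping the base URL (and key). This is a sketch using only the standard library; the Ollama base URL and model name below are assumptions:

```python
import json
import urllib.request


def chat_request(base_url: str, model: str, content: str,
                 api_key: str = "unused") -> urllib.request.Request:
    """Build a request against any OpenAI-compatible /v1/chat/completions endpoint."""
    payload = {"model": model, "messages": [{"role": "user", "content": content}]}
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # local servers typically ignore the key
        },
        method="POST",
    )


# Same builder, different providers -- only base_url changes:
req = chat_request("http://localhost:11434", "llama3", "hello")  # Ollama (assumed URL)
```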
-
Whenever I try to use the AI feature, it just responds with `Protocol "" is unknown`
I've tried this with both Llama3 and dolphin-phi (which works better on my craptop), and it errors out either wa…
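That error usually means the client was handed an endpoint URL without a scheme (e.g. `localhost:11434` instead of `http://localhost:11434`), so the HTTP layer sees an empty protocol. A minimal sketch of the kind of normalization that avoids it, assuming an http default:

```python
from urllib.parse import urlparse


def normalize_endpoint(url: str, default_scheme: str = "http") -> str:
    """Ensure a configured endpoint carries an explicit scheme.

    An endpoint saved as "localhost:11434" has no usable protocol, which is
    exactly the kind of value that makes an HTTP client report an unknown
    "" protocol.
    """
    parsed = urlparse(url)
    if parsed.scheme in ("http", "https"):
        return url
    # No (or unrecognized) scheme: strip any leading slashes and prefix one.
    return f"{default_scheme}://{url.lstrip('/')}"


print(normalize_endpoint("localhost:11434"))   # -> http://localhost:11434
print(normalize_endpoint("http://localhost:11434"))
```

Worth checking whatever endpoint field the app exposes: if it accepts a bare host:port, adding `http://` in front may be the whole fix.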