-
Hi,
Great work on this project!
Would it be possible to add OpenAI-compatible endpoints to the program as an option?
As a good example, [LocalAI](https://github.com/mudler/LocalAI/) compatibili…
-
Hi there,
I am trying to use slim models from llmware, such as 'slim-sql-tool', with Ollama, but I need to create a prompt template in a Modelfile and I was wondering what it would look like. In your …
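As a starting point, an Ollama Modelfile mainly needs a ``TEMPLATE`` describing how the prompt is wrapped. A minimal sketch is below; the GGUF file name is hypothetical, and the ``<human>:``/``<bot>:`` tags are an assumption based on llmware's published prompt style, so verify them against the model card before relying on this.

```
FROM ./slim-sql-tool.gguf

# Assumption: llmware slim models expect <human>:/<bot>: wrapping.
# Check the model card for the exact prompt format.
TEMPLATE """<human>: {{ .Prompt }}
<bot>:"""

# Stop generation when the model starts a new <human>: turn.
PARAMETER stop "<human>:"
```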
-
Why don't we expose the LocalAI web interface like we do with the mastercontainer? That way people could download and configure models without needing to edit the ``modules.yaml`` file. A…
-
Hi,
In https://github.com/mudler/LocalAI/discussions/2590 you describe some ARM64 images for Mac, but the images are not available.
Did I look too soon?
Thanks in advance
Frank
-
I see text to image as a supported feature. How about image to text? There are quite a few capable self-hosted multimodal models these days, such as moondream2 and minicpm2.6, that are supported in ollama…
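For reference, image-to-text in the OpenAI-compatible API is usually expressed as a chat message whose ``content`` is a list of text and ``image_url`` parts. The sketch below only builds that request body; the model name ``moondream2`` and the URL are illustrative assumptions, not tied to any specific server.

```python
import json

def build_vision_request(image_url: str, question: str) -> dict:
    """Build an OpenAI-style vision (image-to-text) chat request body.

    The model name is an assumption for illustration; any multimodal
    model served by the backend could be substituted.
    """
    return {
        "model": "moondream2",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

body = build_vision_request("https://example.com/cat.png", "What is in this image?")
print(json.dumps(body, indent=2))
```

Posting this body to a ``/v1/chat/completions`` endpoint is then the same call shape as a plain text chat completion.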
-
**LocalAI version:**
latest-aio-gpu-nvidia-cuda-12
**Environment, CPU architecture, OS, and Version:**
```
$ uname -a
Linux server 6.1.0-21-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (202…
-
Tools/functions should be supported in streaming mode; currently they work only in synchronous (non-streaming) mode.
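To illustrate what streaming support implies: in the OpenAI streaming format, a tool call arrives as a sequence of partial ``tool_calls`` deltas (the name once, the arguments split across chunks) that the client concatenates by index. The chunk shapes below are hand-written examples of that format, not output captured from a real server.

```python
def merge_tool_call_deltas(chunks: list) -> list:
    """Accumulate streamed tool-call deltas into complete tool calls."""
    calls = {}
    for chunk in chunks:
        for delta in chunk.get("tool_calls", []):
            idx = delta["index"]
            call = calls.setdefault(idx, {"id": "", "name": "", "arguments": ""})
            if "id" in delta:
                call["id"] = delta["id"]
            fn = delta.get("function", {})
            # Name and arguments arrive in fragments; append in order.
            call["name"] += fn.get("name", "")
            call["arguments"] += fn.get("arguments", "")
    return [calls[i] for i in sorted(calls)]

chunks = [
    {"tool_calls": [{"index": 0, "id": "call_1", "function": {"name": "get_weather"}}]},
    {"tool_calls": [{"index": 0, "function": {"arguments": '{"city": '}}]},
    {"tool_calls": [{"index": 0, "function": {"arguments": '"Paris"}'}}]},
]
print(merge_tool_call_deltas(chunks))
# → [{'id': 'call_1', 'name': 'get_weather', 'arguments': '{"city": "Paris"}'}]
```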
-
### Problem
I am using LocalAI with Zep.
```
llm:
  service: "openai"
  model: "gpt-3.5-turbo-1106"
  openai_endpoint: "http://host.docker.internal:8080/v1"
```
I can define model for llm it…
-
### Is your feature request related to a problem? Please describe.
"The problem" I encounter is that the files like images which created are created by root.
When mounting the output to the host,…
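A common workaround for root-owned output is to run the container as the host user. A Compose sketch of that, assuming the service name, image tag, and container output path (none of which are specified here), might look like:

```yaml
services:
  localai:
    image: localai/localai:latest
    # Assumption: run as the host user's UID:GID so files written to the
    # mounted output directory are owned by that user, not root.
    user: "1000:1000"
    volumes:
      # Container-side path is illustrative; use the path your setup writes to.
      - ./output:/output
```

Running as a non-root user may require the image's writable directories to be accessible to that UID, so this is a sketch rather than a drop-in fix.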
-
**LocalAI version:**
localai/localai:v2.17.1-cublas-cuda12
**Environment, CPU architecture, OS, and Version:**
Linux sphinx 6.5.0-28-generic #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 …