-
**LocalAI version:**
v2.15.0
**Environment, CPU architecture, OS, and Version:**
Linux Ubuntu-2204-jammy-amd64-base 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 12:26:49…
-
I decided to open individual, specific tickets instead of a single one [2944](https://github.com/FlowiseAI/Flowise/issues/2944) because, after a bit of investigation, the KO on different embeddings have …
-
### Self Checks
- [X] I have [searched for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
Thanks for your work on this integration. It works perfectly with OpenAI.
Would it be possible to add Ollama as a supported provider? I tried adding it using the "localAI" provider with the port ch…
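For context on why that workaround can get close: Ollama exposes an OpenAI-compatible endpoint, so any client that allows overriding the base URL (as the LocalAI provider does) can be pointed at it. A minimal sketch, assuming Ollama's default endpoint at `http://localhost:11434/v1` and a locally pulled model named `llama3` (both are assumptions, adjust for your setup):

```python
# Hedged sketch: pointing an OpenAI-style client at a local Ollama server.
# The base_url, model name, and dummy api_key below are assumptions for a
# default local install; LocalAI itself would typically serve at :8080/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # ignored by Ollama, but the client requires a value
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```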
-
The docs have
```
run://quay.io/kairos/community-bundles:localai_latest
```
while the correct URL should be
```
run://quay.io/kairos/community-bundles:LocalAI_latest
```
https://kairos.io/docs/examples…
-
1) Add support for a custom OpenAI API, like
https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#localai
for example, when used with local/hosted models served by a custom OpenAI-compatible server (see the sketch after this list).
2) Add suppo…
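For tools built on the openai client library, custom-server support often amounts to honoring a base-URL override. A minimal sketch, assuming the openai Python SDK v1, which resolves `OPENAI_BASE_URL` and `OPENAI_API_KEY` from the environment (the URL and key values below are illustrative):

```python
# Hedged sketch: the openai v1 SDK picks up OPENAI_BASE_URL / OPENAI_API_KEY
# from the environment, so a tool can support custom servers without code changes.
import os
from openai import OpenAI

# Illustrative values; a LocalAI instance typically serves at :8080.
os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:8080/v1")
os.environ.setdefault("OPENAI_API_KEY", "sk-local")  # dummy key for local servers

client = OpenAI()  # base_url and api_key are resolved from the environment
print(client.base_url)
```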
-
### Bug Description
After a clean install of llama-index, I am getting the following error:
`No module named 'openai.openai_object'`
when running almost anything from llama-index, e.g.:
```
from ll…
```
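For context (hedged, not stated in the report itself): `openai.openai_object` existed in openai<1.0 and was removed in the v1 rewrite, so an older llama-index release paired with a new openai install fails at import time. A minimal sketch for diagnosing the mismatch:

```python
# Hedged sketch: detect the openai-version mismatch behind
# "No module named 'openai.openai_object'".
import importlib.util
import openai

print("openai version:", openai.__version__)
if importlib.util.find_spec("openai.openai_object") is None:
    # The fix is typically either pinning the client (pip install "openai<1.0")
    # or upgrading llama-index to a release built for openai>=1.0.
    print("openai.openai_object is gone: pin openai<1.0 or upgrade llama-index")
```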
-
**Is your feature request related to a problem? Please describe.**
Hello, I tried `ollama` on my MacBook and got pretty good performance compared to running `LocalAI` with `llama-stable` direct…
-
Mistral (known for their [7B model](https://mistral.ai/news/announcing-mistral-7b/) and more recently their [Mixture of Experts model](https://mistral.ai/news/mixtral-of-experts/)) have recently start…
-
Langchain4j is, in its current state, already an awesome product.
And the recent Ollama integration makes it possible to have a powerful LLM running locally for chat requests.
What it is unfortu…