-
**LocalAI version:**
Likely affects more than one version
**Environment, CPU architecture, OS, and Version:**
Any
**Describe the bug**
There appears to be an inconsistency between the documentation and …
-
Jina Embeddings are promising current embedding models for RAG.
https://huggingface.co/jinaai/jina-embeddings-v2-base-en
While they are available via Hugging Face Transformers, the current …
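For reference, a minimal sketch of the Transformers path the issue alludes to, based on the model card and assuming `trust_remote_code=True` is acceptable and that the model's custom code exposes `encode()`:

```python
# Minimal sketch: use jina-embeddings-v2-base-en directly via Transformers.
# Assumes the model card's custom code (hence trust_remote_code=True) and
# network access to the Hugging Face Hub.
from numpy import dot
from numpy.linalg import norm
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en", trust_remote_code=True
)

# encode() is provided by the model's remote code, per the model card.
embeddings = model.encode([
    "How is the weather today?",
    "What is the current weather like today?",
])

# Cosine similarity between the two sentence embeddings.
cos_sim = dot(embeddings[0], embeddings[1]) / (norm(embeddings[0]) * norm(embeddings[1]))
print(cos_sim)
```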
-
Hi there,
I am currently running `https://github.com/acon96/home-llm` to integrate an LLM served by `https://github.com/mudler/LocalAI/` on a dedicated host.
I tried installing this here presen…
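As a sanity check for this kind of setup, a minimal sketch that talks to the LocalAI server through its OpenAI-compatible API independently of home-llm; the host, port (LocalAI's default 8080), and model name are placeholders for whatever the actual deployment uses:

```python
# Minimal sketch: confirm the LocalAI instance home-llm points at answers
# OpenAI-compatible chat requests. Host, port, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-local-model",  # placeholder: use the model name from your LocalAI config
    messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
)
print(resp.choices[0].message.content)
```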
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
**Description:**
Being able to use other endpoints will greatly help with adoption and flexibility on the users' end. OpenAI-compatible endpoints are abundant in both self-hosted and enterprise soluti…
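To illustrate the point being made: from a client's perspective, an OpenAI-compatible endpoint differs from the official API only in its base URL and key, so supporting "other endpoints" mostly means making those two values configurable. A rough sketch with placeholder URLs and model names:

```python
# Rough sketch: the same client code works against the hosted API and a
# self-hosted OpenAI-compatible server; only base_url and api_key change.
# Both endpoints and the model name below are placeholders.
from openai import OpenAI

endpoints = [
    ("hosted", "https://api.openai.com/v1", "sk-..."),
    ("self-hosted", "http://my-localai:8080/v1", "unused"),
]

for name, base_url, key in endpoints:
    client = OpenAI(base_url=base_url, api_key=key)
    emb = client.embeddings.create(
        model="text-embedding-ada-002",  # placeholder model name on both backends
        input="OpenAI-compatible endpoints are interchangeable at the client level",
    )
    print(name, len(emb.data[0].embedding))
```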
-
### Describe the bug
Not sure if it's a local-ai issue (or rather one of its dependencies, llama-cpp/gpt4all), but I can't leverage GPU inference on my Nvidia RTX 3060 because of `ggml_cuda_init: failed to in…
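When `ggml_cuda_init` fails inside a container, the first thing worth ruling out is GPU passthrough rather than the inference backend itself. A rough diagnostic sketch using standard NVIDIA container-runtime paths and environment variables; treat it as a starting point, not a fix:

```python
# Rough diagnostic sketch for the ggml_cuda_init failure above: check whether
# the container/process can actually see the NVIDIA driver and device nodes
# before blaming LocalAI, llama.cpp, or gpt4all.
import glob
import os
import shutil
import subprocess

print("NVIDIA_VISIBLE_DEVICES:", os.environ.get("NVIDIA_VISIBLE_DEVICES"))
print("/dev/nvidia* nodes:", glob.glob("/dev/nvidia*"))

if shutil.which("nvidia-smi"):
    # If this fails inside the container, GPU passthrough (e.g. `--gpus all`
    # or the NVIDIA container toolkit) is the likely culprit, not the backend.
    subprocess.run(["nvidia-smi"], check=False)
else:
    print("nvidia-smi not found on PATH")
```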
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
### ⚠️ This issue respects the following points: ⚠️
- [x] This is a **bug**, not a question or a configuration/webserver/proxy issue.
- [x] This issue is **not** already reported on [Github](https://…
-
**LocalAI version:**
quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12
**Environment, CPU architecture, OS, and Version:**
Ryzen 7500, Nvidia 1660 GPU, OpenSUSE Tumbleweed
**Describ…
-
Hi there.
Thanks for the integration. I've been looking for integrations that bring LocalAI into HA, and there are only two, or maybe just yours, depending on how you count the other.
There wa…