-
**LocalAI version:**
185ab93 local build
**Environment, CPU architecture, OS, and Version:**
Intel i9-10850K CPU @ 3.60GHz, RTX 3090, Ubuntu 20.04
Linux Jiminthebox 5.15.0-113-generi…
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
I have a local model (gguf) that I'm using to build a chatbot. The chatbot should be able to answer questions about my application and allow users to add entries to the database. To achieve this, I ha…
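A minimal sketch of how a chatbot like this might query a local gguf model through LocalAI's OpenAI-compatible `/v1/chat/completions` endpoint. The host, port, model name, and system prompt below are illustrative assumptions, not details from the report:

```python
import json
import urllib.request

# Hypothetical local LocalAI instance serving a gguf model; the URL and
# model name are assumptions for illustration only.
LOCALAI_URL = "http://localhost:8080/v1/chat/completions"
MODEL_NAME = "my-gguf-model"

def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system",
             "content": "You answer questions about the application."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,
    }

def ask(user_message: str) -> str:
    """POST the payload to the local endpoint and return the reply text."""
    data = json.dumps(build_chat_request(user_message)).encode("utf-8")
    req = urllib.request.Request(
        LOCALAI_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Database writes would then be layered on top of `ask`, e.g. by parsing the model's reply for a structured "add entry" intent.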
-
Hey :wave: LocalAI (https://github.com/mudler/LocalAI) author here - nice project!
**Is your feature request related to a problem? Please describe.**
I'd like to run this locally with LocalAI - o…
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I hav…
-
This means you can't use lmsys/fastchat or a custom OpenAI-compatible endpoint to host your own models without renaming those models after OpenAI's LLMs.
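A sketch of the workaround this describes: when an app hardcodes OpenAI model names, a custom OpenAI-compatible server (lmsys/fastchat, LocalAI, etc.) must serve its local model *under* an OpenAI name. The alias table and model names here are illustrative assumptions:

```python
# Map the OpenAI model name the app sends to the model actually loaded
# by the local server. Both names below are hypothetical examples.
MODEL_ALIASES = {
    "gpt-3.5-turbo": "vicuna-13b-v1.5",
}

def resolve_model(requested: str) -> str:
    """Resolve an OpenAI-style model name to the locally hosted model."""
    try:
        return MODEL_ALIASES[requested]
    except KeyError:
        raise ValueError(f"no local model registered for {requested!r}")
```

The drawback is exactly the one the comment raises: the local model must masquerade as an OpenAI model instead of being selectable under its own name.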
-
### What is the issue?
### My GPU setup is:
1. RTX 3090 - in the first PCIe 5.0 x16 slot, but the secondary GPU
2. RTX 4090 - in the second PCIe 4.0 x4 slot, but the primary GPU
So, I have a weird bug with memory estimations…
-
Since Ree6 started as "a copy" of Mee6, we decided that we would try to offer all the features that Mee6 offers, but completely for free.
This is a major task and will take time, but please note that we st…
-
I saw a comment from Dave on YouTube saying that ACE_Framework can be used with local models, but it appears the Stacey demo is not yet capable of running with a local LLM.
How can you use Stacey with a loca…
-
**Describe the bug**
(I'm not quite sure whether it's a bug or a setup issue; however, this may help others in the future.)
I have Ollama and LocalAI running on my desktop PC (Windows 11). I installe…