-
I believe that, to resolve https://github.com/mudler/LocalAI/pull/1446, go-llama.cpp needs to be built against at least commit 799a1cb13b0b1b560ab0ceff485caed68faa8f1f of llama.cpp to enable mixt…
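As a minimal sketch, pinning the llama.cpp submodule inside go-llama.cpp to that commit before building might look like this (the repository URL, submodule path, and `libbinding.a` Makefile target are assumptions based on go-llama.cpp's usual layout; adjust to your checkout):

```sh
# Sketch: check out the required llama.cpp commit inside go-llama.cpp,
# then rebuild the C binding against it.
git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp
cd go-llama.cpp/llama.cpp
git checkout 799a1cb13b0b1b560ab0ceff485caed68faa8f1f  # commit mentioned above
cd ..
make libbinding.a  # go-llama.cpp's binding build target; adjust if the Makefile changed
```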
-
Ollama currently doesn't support [OpenAI-compatible function calling](https://github.com/ollama/ollama/issues/2790), but there are models such as [Hermes 2 Pro](https://huggingface.co/NousResearch/Herme…
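For reference, the request shape such models would be expected to handle follows the OpenAI chat completions `tools` schema; a rough sketch against a local OpenAI-compatible endpoint (host, port, and model name are assumptions) looks like:

```sh
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hermes-2-pro-mistral-7b",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }]
  }'
```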
-
I want to use LocalAI instead of OpenAI. How can I modify the OPENAI_API_BASE used in Cursor?
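As a sketch, clients that honor the standard OpenAI environment variables can be redirected like this (the URL assumes a LocalAI instance on its default port 8080; whether Cursor reads these variables or instead needs the base URL set in its own settings is not confirmed here):

```sh
export OPENAI_API_BASE=http://localhost:8080/v1  # point OpenAI-compatible clients at LocalAI
export OPENAI_API_KEY=sk-local                   # placeholder; LocalAI does not require a real key
```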
-
**LocalAI version:**
1.22.0
**Environment, CPU architecture, OS, and Version:**
WSL Ubuntu via VSCode
Intel x86 i5-10400
Nvidia GTX 1070
Windows 10 21H1
uname -a output:
Linux DESKTO…
-
```
./local-ai --models-path models/ --context-size 4000 --threads 4
9:09PM DBG no galleries to load
9:09PM INF Starting LocalAI using 4 threads, with models path: models/
9:09PM INF LocalAI version: v…
```
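Once the server is up, a quick way to confirm it is answering (assuming the default port 8080; adjust if you started it differently) is to list the loaded models over the OpenAI-compatible endpoint:

```sh
curl http://localhost:8080/v1/models
```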
-
Hi all!
When I run pnpm dev or pnpm build on several computers running Manjaro, WebKitWebProcess always spikes to 100% CPU or more, and if I right-click → Inspect Element on the app and record events, I see…
-
**LocalAI version:**
at * f227e91 (origin/master, origin/HEAD) feat(llama.cpp): Bump llama.cpp, adapt grpc server (#1211)
**Environment, CPU architecture, OS, and Version:**
Mac Studio M2 Ult…
-
### Expected Behavior
Returns a valid response rather than a timeout error.
### Actual Behavior
Ends up returning a timeout error. I'm unsure whether it's a bug or something I'm doing wrong, but the exa…
-
**LocalAI version:**
63e1f8fffd506cd156e60b65359446536e4c3e41
**Environment, CPU architecture, OS, and Version:**
M1 MacBook Pro, 16 GB, arm64, macOS Sonoma 14.0
**Describe the bug**
Go dependency `g…
-
Using the OpenAI API is a little expensive for generation; can you add support for local Llama models?
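For context, one common way to do this today is to run an OpenAI-compatible server from llama.cpp itself and point the client at it; the binary name and model path below are assumptions (recent llama.cpp builds ship `llama-server`, older ones called it `server`):

```sh
# Sketch: serve a local GGUF model over an OpenAI-compatible HTTP API.
./llama-server -m models/llama-2-7b.Q4_K_M.gguf --port 8080
# Then point the client's API base at http://localhost:8080/v1 instead of api.openai.com.
```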