-
**LocalAI version:**
1.18.0
**Environment, CPU architecture, OS, and Version:**
Linux 78b4ecbb1b9f 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 GNU/Linux
**D…
-
### What would you like to see?
I tried to upload a large PDF file (about 330 MB) three times, and every attempt failed.
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
### How are you running AnythingLLM?
AnythingLLM desktop app
### What happened?
New user of AnythingLLM. I was able to embed so far with the OpenAI embedder, but wanted to try the default model instead…
-
**Version:** 2.4.3
```
Setting model to LocalAI: llama-2-uncensored-q4ks
[Error: No chat model set] { isTrusted: [Getter] }
*** DEBUG INFO ***
user message: test
model: llama-2-uncensored-q4ks
…
```
sopyb updated 5 months ago
-
**Is your feature request related to a problem? Please describe.**
**Describe alternatives you've considered**
**Additional context**
-
**LocalAI version:**
Git commit id: 9723c3c21d4f2a9fdb91bb2f17e8319a810e6cca
**Environment, CPU architecture, OS, and Version:**
Darwin Kernel Version 22.6.0: Tue Nov 7 21:40:08 PST 2023; root…
-
**LocalAI version:** Master (https://github.com/mudler/LocalAI/commit/7641f92cdedd8f551f9e999402a726ae4e02dca9)
**Describe the bug**
The current commit pinned for llama.cpp in the Makefile is:
```
CPPLLAMA_VER…
```
-
**Description**
It would be awesome if there were an API endpoint (or OpenAI API extension) that you could use to:
- load a model
- unload a model
- list available models
This would allow ho…
-
This card is a tracker for https://github.com/ggerganov/llama.cpp/issues/3969
This seems to happen to me as well with the llama.cpp backend only: I can reproduce it programmatically with certain te…