-
**Is your feature request related to a problem? Please describe.**
Currently, to use the container image you need to build (compose) it every time.
**Describe the solution you'd like**
Build and publ…
-
The llama-cpp backend (not `go-llama`) is missing features compared to go-llama, such as:
- [x] expose lora (it now also needs a scale factor)
- [ ] speculative sampling
- [x] embeddings (a minimal request sketch follows this list)
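For context, here is a minimal sketch of what an embeddings request against the OpenAI-compatible endpoint could look like once the backend exposes it. The base URL, port, and model name below are assumptions for illustration, not part of the original report:

```python
# Minimal sketch: request embeddings from a LocalAI instance through its
# OpenAI-compatible /v1/embeddings endpoint.
# The base URL, port (8080) and model name are assumptions for illustration.
import requests

BASE_URL = "http://localhost:8080/v1"  # assumed LocalAI address

payload = {
    "model": "bert-embeddings",        # hypothetical model name
    "input": "LocalAI computes embeddings locally.",
}

resp = requests.post(f"{BASE_URL}/embeddings", json=payload, timeout=60)
resp.raise_for_status()

# The response mirrors the OpenAI schema: data[0].embedding is a list of floats.
vector = resp.json()["data"][0]["embedding"]
print(len(vector), vector[:5])
```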
-
**LocalAI version:**
v2.19.2 and v2.19.3 (v2.19.1 is fine)
**Environment, CPU architecture, OS, and Version:**
Ubuntu 22.04, amd64
**Describe the bug**
Error below:
`DBG GRPC(Meta-Llam…
-
It would be super-awesome to have a small wizard (possibly disabled by default) to set up an instance by installing a few models from the gallery; a rough sketch of what it could drive follows. Ideally it would also be cool to have a "lite" …
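As a rough idea of what such a wizard might do behind the scenes, here is a hedged sketch of installing a gallery model programmatically. The endpoint paths, job-status polling, and gallery id below are assumptions for illustration and may not match the actual API:

```python
# Hedged sketch: ask a running LocalAI instance to install a model from the
# gallery. Endpoint paths and the gallery id are assumptions for illustration.
import time
import requests

BASE_URL = "http://localhost:8080"  # assumed LocalAI address

# Assumed gallery-apply endpoint; the id format is illustrative only.
job = requests.post(
    f"{BASE_URL}/models/apply",
    json={"id": "model-gallery@bert-embeddings"},
    timeout=30,
).json()

# Poll the (assumed) job-status endpoint until the install finishes.
while True:
    status = requests.get(f"{BASE_URL}/models/jobs/{job['uuid']}", timeout=30).json()
    if status.get("processed"):
        break
    time.sleep(2)

print("model installed:", status)
```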
-
Since https://github.com/ggerganov/llama.cpp/pull/8644 it looks like it is no longer possible to build the rpc-server example with SYCL enabled and Intel oneAPI:
```
#45 1217.2 gmake[6]: Entering d…
-
### What kind of request is this?
Improvement of existing experience
### What is your request or suggestion?
_No response_
### Are you willing to submit PRs to contribute to this feature request?
…
-
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues…
-
Hi, thanks again.
NISQA has been unmaintained for a long time, and this PR could be interesting to help install it on modern Python:
https://github.com/gabrielmittag/NISQA/pull/47
Like
https://librosa.org/d…
-
### The Feature
LocalAI ( https://github.com/mudler/LocalAI ) offers a local OpenAI-compatible API that interfaces with locally hosted models, such as GGUF models served via llama.cpp.
It would be great i…
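For reference, a minimal sketch of talking to a LocalAI instance through the standard OpenAI Python client, relying only on the OpenAI-compatible API mentioned above; the base URL and model name are assumptions for illustration:

```python
# Minimal sketch: use the official OpenAI Python client against a LocalAI
# instance. The base URL and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed LocalAI address
    api_key="not-needed",                 # a local instance typically ignores the key
)

reply = client.chat.completions.create(
    model="llama-3.1-8b-instruct",        # hypothetical GGUF model served by llama.cpp
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(reply.choices[0].message.content)
```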
-
**LocalAI version:**
Docker 3.6, running in a Docker container
**Environment, CPU architecture, OS, and Version:**
Linux 162fd9400319 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14…