-
### What kind of request is this?
None
### What is your request or suggestion?
https://github.com/mudler/LocalAI/issues/2491
### Are you willing to submit PRs to contribute to this feature…
-
**Describe the bug**
I am trying to use function calling with a local LLM. With Ollama I have not found a way to do it yet.
LocalAI, however, has full support for function calling w…
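As context for the report above: LocalAI exposes an OpenAI-compatible chat endpoint, so function calling is typically requested by attaching a `tools` array to the chat payload. The sketch below only builds such a payload; the model name and the `get_weather` tool are illustrative assumptions, not taken from the issue.

```python
import json

def build_chat_request(user_message):
    # Build an OpenAI-style chat-completion request with one declared tool.
    # "local-model" and "get_weather" are hypothetical placeholders.
    return {
        "model": "local-model",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("What is the weather in Paris?")
print(json.dumps(payload, indent=2))
```

A server that supports function calling would respond with a `tool_calls` entry naming the function and its JSON arguments, which the client then executes.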
-
I have my own fine-tuned model; I have placed it in the downloaded folder, and it is detected by the Native LLM selection. But when I prompt something, it crashes with [Failed to load model]. Need help…
-
**Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
These `instructor` models are well tuned for embeddings, as demonstrated [here](ht…
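Once an embedding model (such as an instructor-style model) has produced vectors, candidates are usually ranked by cosine similarity against a query vector. A minimal sketch of that ranking step, using made-up toy vectors rather than real model output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, doc_vecs):
    # Return (index, score) pairs sorted by descending similarity.
    scored = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda s: s[1], reverse=True)

query = [1.0, 0.0]                      # toy query embedding
docs = [[0.9, 0.1], [0.0, 1.0]]         # toy document embeddings
print(rank_by_similarity(query, docs))  # doc 0 ranks first
```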
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
Running the latest AnythingLLM on Docker on Linux Ubuntu 20.04 Server.
Needed to use the Lance_revert image, though, to prevent crashes.
…
-
> I'd like to fix those test failures, but I also want to ship a 0.13.1 release quickly. So I'm going to mark a bunch of tests as expected to fail on Windows.
_Originally posted by @simonw in https…
-
given a list of files, strings rank
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Hi,
Thanks for elia!
Would it be possible to add LocalAI support?
Thanks again
-
**LocalAI version:**
v2.16.0
**Environment, CPU architecture, OS, and Version:**
Running inside a container on linux/amd64, avx2
**Describe the bug**
WebUI chat stops updating the UI,…