-
This means you can't use lmsys/fastchat or any custom OpenAI-compatible server to host a custom OpenAI endpoint without renaming the LLMs after OpenAI's LLMs.
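For context, a minimal sketch of the renaming workaround described above, assuming the OpenAI Python client and a hypothetical local server at `http://localhost:8000/v1` (base URL, API key, and alias are assumptions for illustration):
```python
# Sketch of the renaming workaround: the local server (e.g. a FastChat
# OpenAI-compatible server) must expose the local model under an OpenAI
# model name, because the client hard-codes those names.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-locally",         # placeholder; local servers often ignore it
)

# Works only if the server aliases a local model as "gpt-3.5-turbo".
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```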
-
**Is your feature request related to a problem? Please describe.**
When generating a chat completion, the code is hard-coded to produce a non-standard prompt template that looks something like:
```
### …
```
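For illustration, a minimal sketch of the kind of hard-coded flattening being described; the exact template is truncated above, so the `### <role>` separators below are an assumption:
```python
# Hypothetical reconstruction of a hard-coded "### "-style prompt template.
# The exact separators are an assumption; the excerpt above is truncated.
def flatten_messages(messages: list[dict]) -> str:
    parts = []
    for m in messages:
        # Each chat message becomes a "### <role>" block instead of being
        # sent as structured OpenAI-style messages.
        parts.append(f"### {m['role'].capitalize()}:\n{m['content']}")
    return "\n\n".join(parts) + "\n\n### Assistant:"

prompt = flatten_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```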
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
**LocalAI version:**
Container Image: `quay.io/go-skynet/local-ai:v2.14.0-cublas-cuda12-ffmpeg-core`
**Environment, CPU architecture, OS, and Version:**
Running on K8s
**Describe the…
-
Presently it is very hard to get a Docker container to build with the ROCm backend; some elements seem to fail independently during the build process.
There are other related projects with functiona…
-
**Describe the bug**
I am trying to use function calling with a local LLM. With Ollama I could not find a way to do it yet.
With LocalAI, however, they have full support for function calling w…
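As a sketch of the OpenAI-style function-calling flow the excerpt refers to, using the OpenAI Python client against a LocalAI endpoint; the base URL, model name, and `get_weather` tool are assumptions for illustration:
```python
# Sketch: OpenAI-style function calling against a LocalAI endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",  # whatever model name the LocalAI instance exposes
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the function, the arguments arrive as JSON text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```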
-
1) Add support for a custom OpenAI API, like
https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#localai
for example, when used for local/hosted models served through a custom OpenAI-compatible server
2) Add suppo…
-
Hi :wave:!
Very nice project! Any plans/interest in running local models with something like LocalAI? https://github.com/go-skynet/LocalAI
I'd be happy to take a stab at it if there is intere…
-
### Feature request
Integration with LocalAI and its extended endpoints for downloading models from the gallery.
### Motivation
LocalAI is a self-hosted OpenAI drop-in replacement with support for…
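For reference, a minimal sketch of installing a model from the gallery through LocalAI's extended API, based on its documented `/models/apply` endpoint; the host, port, gallery id, and response fields are assumptions here:
```python
# Sketch: install a model from the LocalAI gallery and poll the job.
# Host/port and the gallery id are assumptions for illustration.
import time
import requests

BASE = "http://localhost:8080"

# POST /models/apply starts an async install job and returns a job uuid.
job = requests.post(f"{BASE}/models/apply",
                    json={"id": "model-gallery@bert-embeddings"}).json()

# GET /models/jobs/<uuid> reports progress until the download completes.
while True:
    status = requests.get(f"{BASE}/models/jobs/{job['uuid']}").json()
    if status.get("processed"):
        break
    time.sleep(2)
print("model installed")
```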
-
Modify the bot.py help function to optionally include an introduction message. This could be useful for inserting a link to an acceptable use policy or giving users a generic introduction, as in the sketch below.
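A minimal sketch of what this could look like; the function and parameter names are hypothetical, since bot.py's actual help function isn't shown:
```python
# Hypothetical sketch: a help function with an optional introduction message.
# Names (build_help, intro_message) are illustrative, not bot.py's actual API.
def build_help(commands: dict[str, str], intro_message: str | None = None) -> str:
    lines = []
    if intro_message:
        # e.g. a link to an acceptable use policy, or a short greeting.
        lines.append(intro_message)
        lines.append("")
    lines.extend(f"/{name} - {desc}" for name, desc in commands.items())
    return "\n".join(lines)

print(build_help(
    {"help": "show this message", "ask": "ask the model a question"},
    intro_message="Welcome! Please review our acceptable use policy: https://example.com/aup",
))
```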