-
**Is your feature request related to a problem? Please describe.**
Nowadays, embedding + reranker is the SOTA approach for improving the accuracy of a RAG system. We already have the embedding API …
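For context, the retrieve-then-rerank flow this request describes can be sketched in plain Python. The `embed` and rerank scoring functions below are toy stand-ins (character frequencies and word overlap), not real models; only the two-stage pipeline shape, which a reranker API would expose, is the point.

```python
# Hedged sketch of a two-stage retrieve-then-rerank RAG pipeline.
# embed() and rerank() use toy scoring functions as stand-ins for a
# real embedding model and a cross-encoder reranker.

def embed(text):
    # Toy "embedding": normalized character-frequency vector (stand-in only).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query, docs, k=3):
    # Stage 1: fast recall by embedding similarity over the whole corpus.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query, candidates):
    # Stage 2: score each query-document pair jointly; a real
    # cross-encoder reranker would replace this word-overlap count.
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(candidates, key=score, reverse=True)

docs = ["how to bake bread", "rust borrow checker", "baking sourdough bread at home"]
top = rerank("bake bread", retrieve("bake bread", docs, k=2))
```

The first stage trades precision for speed across the full corpus; the reranker then spends more compute on only the top-k candidates.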
-
```
2024/01/25 10:13:00 gpu.go:137: INFO CUDA Compute Capability detected: 8.6
^Cuser@llm-01:~$ ollama serve
2024/01/25 10:14:17 images.go:815: INFO total blobs: 14
2024/01/25 10:14:17 images.go:8…
```
-
When I tried to use my own custom Llama model that I downloaded from the internet, I see it says "Using a locally hosted LLM is experimental. Use with caution." When I tried to load a model by clicking "w…
-
### Checklist
- [X] I've searched for similar issues and couldn't find anything matching
- [X] I've included steps to reproduce the behavior
### Affected Components
- [ ] K8sGPT (CLI)
- [X] K8sGPT …
-
### Please confirm whether the following requirements are met:
- [X] Must have a REST/HTTP API.
- [X] Must be publicly accessible. It cannot require application review or have regional restricti…
-
### Self Checks
- [X] I have [searched for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
Steps to reproduce:
1. Run LocalAI with basic auth.
2. Configure the Nextcloud integration_openai app with your basic auth username and password.
3. Use the Smart Picker to generate an image.
4. Image gets gen…
-
### Steps to reproduce
1. Enable the local-ai community container
2. Update your models.yml to this:
```yml
# Stable Diffusion in NCNN with c++, supported txt2img and img2img
- url: github:go-s…
```
-
https://github.com/mudler/LocalAI
Support can simply be added by
```
import openai

# Point the (pre-1.0) openai client at an OpenAI-compatible server
openai.api_base = "http://someinternalhost.local/v1"
```
The URL should be configured by the config.js…
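As a rough sketch of what that base-URL override does: every request path is resolved against a configurable base, so swapping the base to a LocalAI host redirects all calls. The `endpoint` helper and host names below are hypothetical illustrations, not part of any library.

```python
from urllib.parse import urljoin

# Hypothetical default, mirroring the hosted API's /v1 prefix.
DEFAULT_BASE = "https://api.openai.com/v1/"

def endpoint(path, base=DEFAULT_BASE):
    # urljoin drops the last path segment unless the base ends with "/",
    # so normalize it to keep the /v1 prefix intact.
    if not base.endswith("/"):
        base += "/"
    return urljoin(base, path.lstrip("/"))

# Same client code, different target: only the base changes.
endpoint("chat/completions", "http://someinternalhost.local/v1")
```

This is why a single configurable base URL is enough to support any OpenAI-compatible backend.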
-
### Description
Follow the instructions to set up the LocalAI Model Gallery. Click "Gallery Admin."
Available Models List: (truncated)
[ { "code": "invalid_type", "expected": "string", "received": "undefine…