-
### Describe the bug
When starting the project via Docker, after filling in the .env.local file with API keys and the URLs of local LLM systems, the environment variables are not picked up. Th…
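Worth noting: Docker Compose only loads a file named `.env` automatically (and only for compose-file variable substitution); a file called `.env.local` usually has to be referenced explicitly, either with `env_file: .env.local` on the service or `--env-file .env.local` on the command line. A minimal sketch for checking whether the values actually reach the container, assuming hypothetical variable names and a service called `app`:

```python
# check_env.py - run inside the container, e.g. `docker compose exec app python check_env.py`,
# to verify whether the values from .env.local were actually injected.
# The variable names below are placeholders; substitute the keys from your .env.local.
import os

for key in ("OPENAI_API_KEY", "LOCAL_LLM_BASE_URL"):
    value = os.environ.get(key)
    print(f"{key} = {value!r}" if value else f"{key} is NOT set")
```

If the variables print as NOT set here, the problem is in how the file is passed to Docker rather than in the application itself.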
-
## Description
When using the plugin with an LLM model running on a locally hosted Ollama server (e.g., on another server within the same local network), the plugin successfully connects to the Ollama AP…
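For reference, a minimal reachability test against an Ollama server on another machine in the LAN; the IP address and model name are placeholders, and the server needs to be started with `OLLAMA_HOST=0.0.0.0` so it listens on more than just localhost:

```python
# Minimal check that a remote Ollama instance answers chat requests over the LAN.
import requests

resp = requests.post(
    "http://192.168.1.50:11434/api/chat",  # placeholder LAN address, default Ollama port
    json={
        "model": "llama3",                 # a model already pulled on that server
        "messages": [{"role": "user", "content": "ping"}],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```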
-
### What happened?
Using config
```yaml
model_list:
  - model_name: bge-large-en-v1.5
    litellm_params:
      model: huggingface/BAAI/bge-large-en-v1.5
      api_base: http://localhost:80…
```
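For context, a hedged sketch of how such a config is typically exercised: start the proxy with `litellm --config config.yaml` and call the OpenAI-compatible embeddings endpoint. The proxy address below assumes LiteLLM's default port 4000 and that no real API key is required.

```python
# Query the LiteLLM proxy started with `litellm --config config.yaml`.
# localhost:4000 is LiteLLM's default proxy address; adjust if you changed it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="anything")
emb = client.embeddings.create(
    model="bge-large-en-v1.5",   # the model_name from the config above
    input=["hello world"],
)
print(len(emb.data[0].embedding))
```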
-
Server OS:
Home Assistant OS on a Proxmox VM
AI renderer OS:
Windows 10
**Approach**
After entering all connection credentials: the OpenAI API, the OpenAI API admin key, as well as the model name …
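A quick sanity check from the Home Assistant host is to confirm the OpenAI-compatible API on the Windows "AI renderer" machine is reachable and lists the expected model name. This is only a sketch; the IP address, port, and key are placeholders for whatever the local server actually uses.

```python
# Can the OpenAI-compatible API on the renderer machine be reached from the
# Home Assistant host, and does it list the expected model?
import requests

BASE = "http://192.168.1.60:1234/v1"                    # placeholder address of the local server
headers = {"Authorization": "Bearer sk-local-admin-key"}  # placeholder admin key

models = requests.get(f"{BASE}/models", headers=headers, timeout=10)
models.raise_for_status()
print([m["id"] for m in models.json()["data"]])
```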
-
I need to use PyGwalker locally and don't want to use OpenAI for Q&A. However, I can also run LLM models locally. How can I combine that local model with PyGwalker Q&A?
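One hedged approach, assuming the Q&A feature goes through the standard OpenAI Python client: point that client at a local OpenAI-compatible server such as Ollama (its `/v1` endpoint) via environment variables, so no OpenAI account is needed. The host and model name below are assumptions.

```python
# Redirect the OpenAI client to a local Ollama server. This is a sketch: it assumes
# the library reads the standard OPENAI_BASE_URL / OPENAI_API_KEY variables.
import os

os.environ["OPENAI_BASE_URL"] = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
os.environ["OPENAI_API_KEY"] = "ollama"                       # any non-empty string; Ollama ignores it

from openai import OpenAI

client = OpenAI()  # picks up the variables above
reply = client.chat.completions.create(
    model="llama3",  # a model already pulled with `ollama pull llama3`
    messages=[{"role": "user", "content": "Summarise this dataset column: sales"}],
)
print(reply.choices[0].message.content)
```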
-
I am very excited by the idea of text to GQL and would love to implement it for my organization. The context to send along is pretty big, though, so I'd love the option to use a local llama instance …
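On the large-context concern: with a local llama instance served by Ollama, the context window can be raised per request via the `num_ctx` option. A minimal sketch, with the host, model, schema text, and window size as placeholders:

```python
# Text-to-query sketch against a local llama instance served by Ollama.
import requests

schema = "type Person { name: String, worksAt: Company } ..."  # your (large) schema/context
prompt = f"Schema:\n{schema}\n\nWrite a GQL query for: people who work at Acme."

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": 8192},  # enlarge the context window for big schemas
    },
    timeout=120,
)
print(resp.json()["response"])
```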
-
### Feature request
How to use a local LLM to evaluate prediction quality? For example, Llama-3-70B-Instruct?
### Motivation
How to use a local LLM to evaluate prediction quality? For …
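A minimal "LLM as judge" sketch against a locally served Llama-3-70B-Instruct, using an OpenAI-compatible endpoint (e.g. Ollama or vLLM). The host, port, and model tag are assumptions, not part of the original request.

```python
# Ask a locally served Llama-3-70B-Instruct to grade a prediction against a reference.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")  # placeholder local server

prediction = "Paris is the capital of France."
reference = "The capital of France is Paris."

judge_prompt = (
    "Rate how well the prediction matches the reference on a 1-5 scale.\n"
    f"Reference: {reference}\nPrediction: {prediction}\n"
    "Answer with a single integer."
)

score = client.chat.completions.create(
    model="llama3:70b-instruct",  # placeholder model tag for Llama-3-70B-Instruct
    messages=[{"role": "user", "content": judge_prompt}],
    temperature=0,
)
print(score.choices[0].message.content.strip())
```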
-
https://ollama.com/download
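Once Ollama is installed and the daemon (or desktop app) is running, a quick way to confirm the local API is up is to list the pulled models on the default port 11434; what gets printed depends on which models you have pulled.

```python
# Check that a freshly installed Ollama daemon is up and list the pulled models.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=5)
tags.raise_for_status()
print([m["name"] for m in tags.json().get("models", [])])
```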
-
### System Info
NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA Version: 12.6 (truncated `nvidia-smi` table)
-
Ollama, for example.