-
Can you please add local LLM support?
Ollama support would be nice too.
Thank you.
-
Would love to have local LLM support through LM Studio or Ollama.
-
Instead of using OpenAI (#69), we want to use a local model that runs on the device (makes it free!).
-
Hey, this looks like a good initiative.
I have locally downloaded LLMs; can't those be used with this project? Why do I need API keys if I don't want to use those platforms?
I have LM Studio as w…
-
Hi, can you please provide a guide or support for using local LLM models, like Ollama's Llama 3.1 8B or 70B?
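In case it helps shape such a guide: a minimal stdlib-only sketch of talking to a locally running Ollama server, assuming the common default of `http://localhost:11434` and its `/api/chat` endpoint (check your install's docs for the exact port). The model tags below are examples, not guaranteed names.

```python
import json
import urllib.request

# Assumed default address for a local Ollama server.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for an Ollama /api/chat request."""
    return {
        "model": model,  # e.g. "llama3.1:8b" or "llama3.1:70b"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response, not a stream
    }

def chat(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (requires a running Ollama server with the model pulled):
# print(chat("llama3.1:8b", "Hello!"))
```
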
-
My code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_precisi…
```
-
Hi, thanks for building and opening Savvy!
Is there any way I can configure it to use a locally running LLM, with an OpenAI-compatible API or otherwise?
Thanks!
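For the OpenAI-compatible route, one common pattern is to keep the existing OpenAI-style request shape and only swap the base URL. A hedged sketch, assuming the local server (Ollama, LM Studio, llama.cpp server, etc.) exposes a `/v1/chat/completions` endpoint; the base URLs below are the usual defaults, not guaranteed for every install:

```python
import json

# Commonly seen default local endpoints (assumptions, verify per install).
LOCAL_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",
    "lm_studio": "http://localhost:1234/v1",
}

def build_request(base_url: str, model: str, prompt: str):
    """Return the (url, headers, body) of an OpenAI-style chat completion
    call; any OpenAI client can usually be redirected the same way, by
    pointing its base URL at the local server."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # Local servers typically ignore the key, but OpenAI-style
        # clients often require one to be set.
        "Authorization": "Bearer not-needed",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```
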
-
How do we use the OpenAI/ChatGPT prompt system with the KoboldAI or text-generation-webui APIs?
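A rough sketch of the adaptation involved: these backends expose completion-style endpoints that take raw text, so ChatGPT-style role/content messages have to be flattened into a single prompt first. This assumes KoboldAI's `/api/v1/generate` endpoint and a `{"results": [{"text": ...}]}` response shape; text-generation-webui has also offered an OpenAI-compatible mode, so check your install's docs for the exact port and path.

```python
import json
import urllib.request

def flatten_chat(messages):
    """Turn ChatGPT-style role/content messages into one plain prompt,
    since completion endpoints take raw text rather than a message list."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")  # cue the model to answer next
    return "\n".join(lines)

def generate(base_url, messages, max_length=200):
    """POST the flattened prompt to a KoboldAI-style /api/v1/generate
    endpoint (assumed shape) and return the generated text."""
    body = json.dumps({
        "prompt": flatten_chat(messages),
        "max_length": max_length,
    })
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# Example (requires a running local backend):
# generate("http://localhost:5000", [{"role": "user", "content": "Hello!"}])
```
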
-
The Tile prompter currently links to Hugging Face.
It would be better to give users the option, and the capability, of using local VLM and LLM models.
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Local LLMs don't work for subdomains and vulnerabilities.
### Expected Behavior
Local LLMs should work for subdomains and vulnerabilities.
### Steps T…