-
Hey, this looks like a good initiative.
I have locally downloaded LLMs; can't those be used with this project? Why do I need API keys if I don't want to use those platforms?
I have LM Studio as w…
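For context, LM Studio and similar local servers expose an OpenAI-compatible HTTP endpoint, so a client can point at the local address instead of api.openai.com. A minimal stdlib sketch (the default address `http://localhost:1234/v1` and the `build_chat_request` helper are assumptions, not part of any project's API):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request for a local server."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Assumed LM Studio default endpoint; no real API key is needed locally.
req = build_chat_request("http://localhost:1234/v1", "local-model", "Hello!")
print(req.full_url)  # http://localhost:1234/v1/chat/completions

# To actually call the server (requires LM Studio running with a model loaded):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Any client that lets you override the base URL can reuse the same request shape against the local endpoint.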
-
Hi, can you please provide a guide or support for using local LLM models, like Llama 3.1 8B or 70B via Ollama?
-
Evaluate using https://github.com/xenova/transformers.js as a local executor, or as a fallback for when WebGPU is not available.
-
I deployed Qwen2.5-14B-Instruct on my local server and started the LLM correctly using vLLM.
But when I executed the sample code,
```
from paperqa import Settings, ask
local_llm_config = dict(
…
```
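As a point of reference, paperqa routes model calls through LiteLLM, so a local config is typically a LiteLLM-style `model_list` entry pointing at the vLLM server's OpenAI-compatible endpoint. A hedged sketch of such a config (the port 8000, the `openai/` prefix, and the key names are assumptions based on vLLM and LiteLLM defaults, not verified against this exact paperqa version):

```python
# Hypothetical LiteLLM-style routing config for a local vLLM server.
local_llm_config = dict(
    model_list=[
        dict(
            model_name="Qwen2.5-14B-Instruct",
            litellm_params=dict(
                model="openai/Qwen2.5-14B-Instruct",  # "openai/" = OpenAI-compatible route (assumed)
                api_base="http://localhost:8000/v1",  # assumed vLLM default port
                api_key="EMPTY",  # vLLM ignores the key; any placeholder works
            ),
        )
    ]
)
print(local_llm_config["model_list"][0]["litellm_params"]["api_base"])
```

If the sample code fails, mismatches between `model_name` here and the name passed to `Settings` are a common culprit.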
-
How do we use the OpenAI/ChatGPT prompt system with the KoboldAI or textgen-webui APIs?
-
Hi! How can I replace OpenAI's model with a local offline LLM? Looking forward to your reply!
-
### 🔖 Feature description
I recently added a swappable base_url for the OpenAI client, so if you configure DocsGPT with LLM_NAME=openai,
you can run any model you want locally with an OpenAI compa…
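The swap itself amounts to resolving the endpoint at client-construction time: read a base URL override from the environment and fall back to the hosted API. A minimal sketch (the variable name `OPENAI_API_BASE` is illustrative, not necessarily DocsGPT's actual config key):

```python
# Illustrative only: resolve the OpenAI-compatible endpoint from a config
# mapping, defaulting to the hosted API when no override is set.
def resolve_base_url(env: dict) -> str:
    return env.get("OPENAI_API_BASE", "https://api.openai.com/v1")

# Pointing at a local OpenAI-compatible server (e.g. llama.cpp, vLLM, LM Studio):
print(resolve_base_url({"OPENAI_API_BASE": "http://localhost:8000/v1"}))  # http://localhost:8000/v1

# No override set -> hosted API:
print(resolve_base_url({}))  # https://api.openai.com/v1
```

Keeping the default as the hosted endpoint means existing deployments are unaffected unless the override is explicitly set.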
-
My code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_precisi…
```
-
Hi, thanks for building and open-sourcing Savvy!
Is there any way I can configure it to use a locally running LLM? With an OpenAI-compatible API or otherwise.
Thanks!
-
What about custom/private LLMs? Will there be an option to use some of LangChain's local features, like llama.cpp?