-
### Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [x] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
TLDR: Create and Test Local LLMs for Podcastify
-
We don't have an API key for OpenAI, but we do have access to other LLMs, for example through Ollama.
May I ask whether it is possible to call another LLM through its API? If so, how can I configure it?
Thanks
`python tests/te…
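For projects that speak the OpenAI chat API, one common route is Ollama's OpenAI-compatible endpoint (served at `http://localhost:11434/v1` by default). A minimal sketch, with a placeholder model name; the helper only builds the request, since actually sending it requires a running Ollama server:

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible chat endpoint at /v1/chat/completions
# on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat request for a local model."""
    payload = {
        "model": model,  # e.g. a model previously pulled with `ollama pull`
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3", "Summarize this article in two sentences.")
# Sending would be: urllib.request.urlopen(req) -- needs a running Ollama server.
```

Any client that lets you override the OpenAI base URL (for example `base_url="http://localhost:11434/v1"` in the official `openai` Python client) can be pointed at the same endpoint.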
-
Hi,
Looking into https://github.com/PLangHQ/plang/issues/14
Is it possible to get `plang` working with an LLM running locally?
-
### Describe the feature you'd like
The original prompts for generating the tags seem to be fine for GPT-x, but with a local LLM the JSON might not be generated as expected by hoard…
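One workaround that often helps with local models is to parse the reply defensively instead of assuming clean JSON. A hedged sketch (the helper name and regexes are illustrative, not taken from the project's code):

```python
import json
import re

def extract_json(raw: str):
    """Tolerate common local-LLM quirks (code fences, leading prose)
    when the prompt asked for a bare JSON object of tags."""
    # Strip a ```json ... ``` fence if the model wrapped its answer in one.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        raw = fenced.group(1)
    # Fall back to the first {...} span in the remaining text.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

print(extract_json('Here you go:\n```json\n{"tags": ["news", "ai"]}\n```'))
# → {'tags': ['news', 'ai']}
```

On a parse failure the caller can retry the request, ideally with the invalid output echoed back in the prompt, before giving up.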
-
![b9b846877c545e753d310f5dc4d092d](https://github.com/user-attachments/assets/5aa5ce35-13d1-4056-b971-61e9c463e9ab)
-
I deployed Qwen2.5-14B-Instruct on my local server and started the LLM correctly using vLLM.
But when I executed the sample code,
```python
from paperqa import Settings, ask
local_llm_config = dict(
…
```
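For reference, paper-qa routes models through LiteLLM, so a locally served OpenAI-compatible model can usually be described with a `model_list` entry. The following is only a sketch of that config's shape, assuming vLLM's default OpenAI-compatible server on port 8000; the model name and key are placeholders, and the paper-qa call is shown only in a comment:

```python
# Sketch of a LiteLLM-style router config for a model served by vLLM's
# OpenAI-compatible server (default base URL: http://localhost:8000/v1).
# Values are placeholders, not tested against paper-qa itself.
local_llm_config = {
    "model_list": [
        {
            "model_name": "Qwen2.5-14B-Instruct",
            "litellm_params": {
                # The "openai/" prefix tells LiteLLM to speak the OpenAI protocol.
                "model": "openai/Qwen2.5-14B-Instruct",
                "api_base": "http://localhost:8000/v1",
                "api_key": "EMPTY",  # vLLM does not require a real key by default
            },
        }
    ]
}

# It would then be passed along the lines of:
#   from paperqa import Settings, ask
#   answer = ask("your question",
#                settings=Settings(llm="Qwen2.5-14B-Instruct",
#                                  llm_config=local_llm_config))
```

If the request reaches vLLM but fails, a common cause is a mismatch between `model_name` here and the model name vLLM was launched with.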
-
Instead of using OpenAI (#69), we want to use a local model that runs on the device (which makes it free!).
-
### Discussed in https://github.com/bmachek/lrc-ai-assistant/discussions/3
Originally posted by **FA-UC-HR** November 15, 2024
What do you think about using local / self-hosted LLMs? Like olla…
-
* langchain-community