Open av opened 3 months ago
Curious about your use case for local LLMs vs. OpenAI?
Usage in no-network conditions, data protection, and the ability to choose specific models for specific kinds of workloads (for example, fine-tuned ones).
Nothing unique, just a typical "local LLM" use case.
Thanks for sharing!
Local redaction and support for local LLMs are planned, and I'm tracking them on our public feedback board here
Hi @av,
quick update:
I've moved away from using OpenAI for generating runbooks; we now use Llama 3.1 hosted on Groq.
Savvy ask/explain still uses GPT-4o for now.
Hi, thanks for building and opening Savvy!
Is there any way I can configure it to use a locally-running LLM, with an OpenAI-compatible API or otherwise?
Thanks!
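For context, local runners such as Ollama expose an OpenAI-compatible endpoint, so any client that lets you override the API base URL can talk to them without code changes. A minimal sketch of what that wiring looks like (this assumes Ollama is running on its default port with the `llama3.1` model pulled; it is not a claim that Savvy supports these variables today):

```shell
# Assumption: Ollama is serving locally; its /v1 routes implement the
# OpenAI-compatible chat completions API.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # any non-empty value works; Ollama ignores it

# Same request shape as OpenAI's /chat/completions endpoint,
# just aimed at the local server:
curl "$OPENAI_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```

If Savvy exposed a configurable base URL for its OpenAI client, this is all a local setup would need on the user's side.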