-
This issue explains how to host the LLM model locally.
For all the solutions listed below, `ngrok.com` (or any similar tool) can be used to share the local AI server with other people.
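As a sketch of the sharing step: Ollama listens on port 11434 by default, so tunneling that port is enough to hand out a public URL. (The exact ngrok invocation below assumes the default Ollama port and a stock ngrok install.)

```shell
# Start the local model server (listens on 127.0.0.1:11434 by default)
ollama serve

# In another terminal, expose that port through an ngrok tunnel;
# the forwarding URL ngrok prints is what you share with others
ngrok http 11434
```

Anyone with the forwarding URL can then point an Ollama or OpenAI-compatible client at it instead of `localhost`.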
We ha…
-
I want to use my ollama server on my laptop instead of sending data to OpenAI
-
It would be nice if gitlens's AI features could integrate with LLMs running locally. For example via [ollama](https://github.com/ollama/ollama). Not everybody can use the cloud for one reason or anoth…
-
Use some Local AI to generate some cool speech phrases for the house.
https://www.youtube.com/watch?v=sfcM-bfFyP4&t=8s
https://docs.litellm.ai/docs/providers/ollama
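A minimal sketch of what a request to a local Ollama server could look like for generating those phrases. The endpoint path `/api/generate` and default port 11434 come from the Ollama docs; the model name `llama3` and the prompt are placeholders, and the payload is only built here, not sent.

```python
import json

# Default local Ollama endpoint (assumption: stock install, default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_speech_request(prompt: str, model: str = "llama3") -> bytes:
    """Serialize a non-streaming generate request body for Ollama."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

# Example: a house-announcement prompt
body = build_speech_request("Announce that the front door is open.")
print(json.loads(body)["model"])  # the model field round-trips unchanged
```

The same body could then be POSTed to `OLLAMA_URL` with any HTTP client, or routed through LiteLLM using its `ollama/` provider prefix.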
-
[Ollama Development](https://github.com/ollama/ollama/blob/main/docs/development.md)
```nix
environment.systemPackages = [
pkgs.ollama
];
```
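Alternatively, on a recent enough nixpkgs there is a dedicated module, so Ollama can run as a managed service instead of just being on the PATH (a sketch, assuming the `services.ollama` module is available in your channel):

```nix
# Runs ollama as a background service on the default port
services.ollama.enable = true;
```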
TODO:
- [ ] Do I need to install [RO…
-
## To Reproduce
1. Spin up docker containers via Supabase CLI.
2. Go to the 'SQL Editor'.
3. Try chatting with AI.
[1720546308.webm](https://github.com/supabase/supabase/assets/38112087/746e50…
-
I think I basically summed it up in the title. Or is there one and I'm just not seeing it?
-
Using OpenAI on larger codebases, and rerunning this often, could incur no small cost, which might also prevent people from using it.
Ollama has added full support for the OpenAI API for running local m…
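Because of that OpenAI-compatible API, an existing OpenAI client mostly just needs its base URL swapped to the local server. The sketch below builds (but does not send) such a request with only the standard library; the base URL follows Ollama's documented `/v1` compatibility endpoint, while the model name and the dummy API key are placeholders.

```python
import json
import urllib.request

# Local Ollama server exposing the OpenAI-compatible surface (assumption:
# default port 11434)
BASE_URL = "http://localhost:11434/v1"

def local_chat_request(messages: list[dict], model: str = "llama3") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at localhost."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients expect one
            "Authorization": "Bearer ollama",
        },
        method="POST",
    )

req = local_chat_request([{"role": "user", "content": "Hello"}])
print(req.full_url)
```

Sending `req` with `urllib.request.urlopen` (or pointing an official OpenAI SDK at the same base URL) would hit the local model instead of OpenAI's servers.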
-
Your project caught my attention. Feel free to check out my project on my GitHub as well. Would it be possible to adjust your code to work with a local LLM instead of going through GPT-4?
-
```
# Master key to keep track of access
MASTER_KEY = "master_key"
```
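If this master key is for a LiteLLM proxy, the LiteLLM docs place it under `general_settings` in the proxy's `config.yaml`; a minimal sketch (the key value is a placeholder):

```yaml
general_settings:
  # Clients must send this as their Bearer token to reach the proxy
  master_key: "master_key"
```

Requests to the proxy would then include `Authorization: Bearer master_key`, letting one key gate access to all configured models.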