*Demo: no speedup. MacBook Pro 13, M1, 16GB, Ollama, orca-mini.*
Local GPT assistance for maximum privacy and offline access.
The plugin lets you open a context menu on selected text and pick an AI assistant action.
Also works with images
*Demo: no speedup. MacBook Pro 13, M1, 16GB, Ollama, bakllava.*
Default actions:
You can also add your own actions, share the best ones, or get them from the community.
Supported AI Providers:
This plugin is available in the [Obsidian community plugin store](https://obsidian.md/plugins?id=local-gpt).
You can also install this plugin via BRAT: `pfrankov/obsidian-local-gpt`
```shell
ollama pull gemma2
```

or any preferred model from the library.

Additional: if you want to enable streaming completion with Ollama, you should set the environment variable `OLLAMA_ORIGINS` to `*` (on macOS):

```shell
launchctl setenv OLLAMA_ORIGINS "*"
```
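Before pointing the plugin at Ollama, it can help to confirm the server is actually reachable. A minimal sketch, assuming Ollama's default address `http://localhost:11434` (the address is an assumption, not stated above):

```python
# Quick reachability check for a local Ollama server.
# OLLAMA_URL below assumes Ollama's default port 11434.
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama address (assumption)

def ollama_is_running(base_url: str = OLLAMA_URL) -> bool:
    """Return True if a server answers with HTTP 200 on its base URL."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, DNS failure, etc.
        return False
```

If this returns `False`, start Ollama first and re-check before configuring the plugin.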
There are several options to run a local OpenAI-like server:
Click on the `+` icon and press a hotkey (e.g. `⌘ + M`).

It is also possible to specify a fallback to handle requests, so you can use larger models when you are online and smaller ones when offline.
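The fallback behavior can be pictured as a try-the-primary-first strategy. The function and provider names below are hypothetical, for illustration only; the plugin's real fallback is configured in its settings:

```python
# Hypothetical sketch of a primary/fallback provider strategy: ask the
# larger (online) provider first and fall back to the smaller local one
# on any failure. Names are illustrative, not the plugin's real API.
from typing import Callable

def complete_with_fallback(prompt: str,
                           primary: Callable[[str], str],
                           fallback: Callable[[str], str]) -> str:
    """Ask the primary provider; on any error, use the fallback provider."""
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

# Stub providers to show the behavior:
def online_model(prompt: str) -> str:
    raise ConnectionError("offline")  # simulate having no network

def local_model(prompt: str) -> str:
    return f"local answer to: {prompt}"

print(complete_with_fallback("Summarize this note", online_model, local_model))
# → local answer to: Summarize this note
```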
Since you can provide any OpenAI-like server, it is possible to use OpenAI servers themselves.
Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs.
1. Select `OpenAI compatible server` in `Selected AI provider`
2. Set `OpenAI compatible server URL` to `https://api.openai.com`
3. Paste your `API key` from the API keys page
4. Set the model name (e.g. `gpt-3.5-turbo`)
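Under the hood, an OpenAI-compatible server accepts chat requests on the standard `/v1/chat/completions` endpoint. A sketch of building such a request (the helper name is illustrative, and the key passed in is a placeholder):

```python
# Illustrative sketch: build the kind of request an OpenAI-compatible
# server expects on /v1/chat/completions (part of the OpenAI API).
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for an OpenAI-compatible server."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("https://api.openai.com", "YOUR_API_KEY",
                         "gpt-3.5-turbo", "Hello")
print(req.full_url)  # → https://api.openai.com/v1/chat/completions
```

Because only the base URL changes, the same request shape works for any local OpenAI-like server you configure instead.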