Demo shown at real speed (no speedup): MacBook Pro 13, M1, 16GB, Ollama, orca-mini.
Local GPT assistance for maximum privacy and offline access.
The plugin allows you to open a context menu on selected text to pick an AI-assistant's action.
The most casual AI-assistant for Obsidian.
Also works with images.
Demo at real speed (no speedup): MacBook Pro 13, M1, 16GB, Ollama, bakllava.
It can also use context from links, backlinks, and even PDF files (RAG).
1. Install an embedding model, either the fastest one:
ollama pull nomic-embed-text
or a slower but more accurate one:
ollama pull bge-m3
2. Select the embedding model in the plugin's settings and try to use the largest model with the largest context window.
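Once a model is pulled, you can sanity-check it directly against Ollama's HTTP API before pointing the plugin at it. A minimal sketch, assuming `ollama serve` is running on the default port 11434 (this endpoint belongs to Ollama itself, not to the plugin):

```shell
# Request an embedding for a test string from a locally running Ollama.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Obsidian is a note-taking app"}'
```

The response is a JSON object containing an "embedding" array of floats; an error here usually means the model name is misspelled or the server is not running.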
You can also add yours, share the best actions or get one from the community.
This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=local-gpt
You can also install this plugin via BRAT: pfrankov/obsidian-local-gpt
ollama pull gemma2
or any preferred model from the library.

Additional: if you want to enable streaming completion with Ollama, you should set the environment variable OLLAMA_ORIGINS to *:
launchctl setenv OLLAMA_ORIGINS "*"
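launchctl setenv applies to the macOS GUI-launched Ollama app. As a one-off alternative on any OS, you can set the variable just for a single server run started from a terminal (a sketch, assuming you launch the server yourself rather than via the desktop app):

```shell
# Allow any origin (CORS) only for this server run.
OLLAMA_ORIGINS="*" ollama serve
```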
There are several options to run a local OpenAI-like server:
Tap the + icon and press a hotkey (e.g. ⌘ + M).

It is also possible to specify a fallback to handle requests: this allows you to use larger models when you are online and smaller ones when offline.
Since you can provide any OpenAI-like server, it is possible to use OpenAI servers themselves.
Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs.
Select OpenAI compatible server in Selected AI provider.
Set OpenAI compatible server URL to https://api.openai.com/v1.
Retrieve and paste your API key from the API keys page.
Specify a model (e.g. gpt-4o).
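With those settings, requests go to the standard OpenAI chat completions endpoint. You can verify the URL and key outside Obsidian with a plain request (a sketch, assuming your key is exported as OPENAI_API_KEY):

```shell
# Minimal request against the OpenAI-compatible endpoint configured above.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A JSON reply with a "choices" array confirms the server URL and key are valid; the same check works for any other OpenAI-like server by swapping the URL.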