This plugin allows Obsidian users to access local and web LLMs. Local LLMs are available via GPT4All. Currently, OpenAI models are the only web-based LLMs available for use in the plugin. We are working on adding support for Google Gemini in the near future.
This plugin is in Beta and still under development, so it can be installed either through another Obsidian plugin, Beta Reviewers Auto-update Tester (BRAT) - Quick guide for using BRAT (Recommended):
BRAT: Add a beta plugin for testing
or by cloning the repo:

1. Check your Node version (`node --version`).
2. Run `npm i` or `yarn` to install dependencies.
3. Run `npm run build` to build the plugin.

GPT4All
In order to use the GPT4All LLMs you must have the desktop client from NomicAI downloaded to your local computer and have at least one GPT4All LLM downloaded.
We currently have local doc functionality working for GPT4All models, which lets users add their Obsidian Vault to the GPT4All client and chat with models about their local notes.
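For illustration, the GPT4All desktop client can expose an optional OpenAI-compatible local API server, which is one way code can reach local models. This is a minimal sketch, not the plugin's actual code; the port (4891 is the client's usual default) and the model file name are assumptions to check against your own client's settings:

```typescript
// Sketch: talking to a local GPT4All model over the desktop client's
// optional OpenAI-compatible API server. Port 4891 and the model file
// name are assumptions -- check your own GPT4All settings.
const GPT4ALL_BASE = "http://localhost:4891/v1";

function localChatPayload(model: string, prompt: string): string {
  return JSON.stringify({
    model, // e.g. the model file name shown in the GPT4All client
    messages: [{ role: "user", content: prompt }],
  });
}

// Usage (requires the API server enabled in GPT4All settings):
//   await fetch(`${GPT4ALL_BASE}/chat/completions`, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: localChatPayload("mistral-7b-openorca.Q4_0.gguf", "Summarize my note"),
//   });
```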
OpenAI
The OpenAI models we currently support come pre-loaded in the plugin. They include the chat models and image generation models. We are working on expanding access to most, if not all, of the OpenAI endpoints.
In order to access these models, you will need to have an OpenAI account with a generated API Key and credits allowing you to make API calls. You can either generate an API Key in the LLM Plugin settings using the "Generate Token" button, or just add your API Key to the input bar. Once your API Key is input, you should have full access to the OpenAI models that we support.
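For reference, a chat request authorized with your API key looks like the following. This is a minimal sketch of OpenAI's public REST API, not the plugin's own code, and the model name is only an example; it is written as a request builder so the pieces are easy to inspect:

```typescript
// Sketch of an OpenAI Chat Completions request authorized with an API key.
// Endpoint and payload follow OpenAI's public REST API; "gpt-4o-mini" is
// only an example model name.
interface ChatRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(apiKey: string, prompt: string): ChatRequest {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The API key you generated or pasted into the plugin settings
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// To send it (Node 18+ or a browser context):
//   const req = buildChatRequest(myKey, "Hello");
//   const res = await fetch(req.url, req);
```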
Users are able to access LLMs in a variety of ways: a modal, a floating action button (FAB), and a sidebar widget. The FAB can be toggled on and off through the plugin settings or through the command palette. The widget can be used in the sidebar or in place of a note tab.
In each of the views, you have access to Model Settings, Chat History, and New Chat options. Clicking the settings or history button switches to that tab in the plugin view; to get back to the prompt tab, simply click the highlighted button again.
The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and files to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, File Search, and Function Calling. Our team currently supports only the File Search tool.
In order to use this tool, you first must navigate to the Assistants Creation tab in any of our three views by clicking on the Robot Icon in the header.
You can fill out this form however you like; there are just a couple of things to note:
You can add a single file or multiple files
Once you have created an assistant, the form will become blank again, and you can head over to the settings tab, where you will now see your assistant listed in the models dropdown. Selecting your assistant will allow you to interact with the files you just uploaded by going back to the prompt screen.
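For the curious, assistant creation with File Search corresponds to a single call against OpenAI's public Assistants API (v2). The sketch below shows that REST call as a request builder; the plugin's own form may map fields differently, and the model, name, and instructions are examples:

```typescript
// Sketch of the REST call behind creating an assistant with File Search.
// Follows OpenAI's public Assistants API (v2); field values are examples.
interface AssistantRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildAssistantRequest(
  apiKey: string,
  name: string,
  instructions: string,
): AssistantRequest {
  return {
    url: "https://api.openai.com/v1/assistants",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
      // Required beta header for the Assistants API
      "OpenAI-Beta": "assistants=v2",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // example model name
      name,
      instructions,
      tools: [{ type: "file_search" }], // the one tool this plugin supports
    }),
  };
}
```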
GPT4All makes it simple, with just a few installation steps, to create a context out of all of your vault's notes and bring the information you have in on-device files into your LLM chats.
Tag your code:
git tag 0.19.16
Push your tag:
git push origin 0.19.16
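Obsidian's release convention expects the pushed tag to match the `version` field in `manifest.json`, so a small pre-push check can catch mismatched version numbers. This helper is hypothetical, not part of the repo:

```typescript
// Hypothetical pre-push check: the tag you are about to push should match
// the "version" field in manifest.json, since Obsidian resolves plugin
// releases by that version string.
function tagMatchesManifest(tag: string, manifestJson: string): boolean {
  const manifest = JSON.parse(manifestJson) as { version?: string };
  return manifest.version === tag;
}

// Example:
//   tagMatchesManifest("0.19.16", readFileSync("manifest.json", "utf8"))
```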
Generate the build assets on your local machine with `npm run build`, then add `main.js`, `manifest.json`, and `styles.css` to your release.

Back on your local machine...
Make sure you have the community plugin BRAT installed
Inside of BRAT, click Add Beta Plugin, enter the plugin repository, and click Add Plugin.