Obsidian Local LLM is a plugin for Obsidian that gives you access to a powerful neural network, allowing you to generate text in a wide range of styles and formats using a local LLM from the LLaMA family.
The plugin lets you enter a prompt in a canvas block and receive the answer in a new block. The LLM can be configured with a variety of models and settings, so you can tailor the output to your needs.
See gallery below.
The plugin uses the server from llama-cpp-python as its API backend, which builds on the llama.cpp library. The plugin also requires Python 3.7 or later and the pip package manager to be installed on your system.
To install llama-cpp-python with its server dependencies, run the following command (the quotes keep the brackets from being interpreted by shells such as zsh):

```shell
pip install 'llama-cpp-python[server]'
```
Follow the link to the repository, click 'Show Table with models', and choose a model in the 'ggml' format that suits your needs.
For now, there are two ways to install the plugin.
```shell
git clone https://github.com/zatevakhin/obsidian-local-llm
cd obsidian-local-llm
npm install
npm run build
```
Create an `obsidian-local-llm` directory in your vault's plugins folder (`$HOME/MyObsidian/.obsidian/plugins`), then copy `main.js`, `manifest.json`, and `styles.css` into that directory.

To use the plugin, follow these steps:
```shell
export MODEL=/.../ggml-model-name.bin
python3 -m llama_cpp.server
```
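Once the server is running, the plugin communicates with it over HTTP. As an illustration, here is a minimal TypeScript sketch of a client for the server's OpenAI-compatible `/v1/completions` endpoint; the port (8000) is the llama-cpp-python default, and the helper names (`buildCompletionRequest`, `complete`) are made up for this example, not part of the plugin's actual API:

```typescript
// Shape of an OpenAI-style completion request, as accepted by the
// llama-cpp-python server's /v1/completions endpoint.
interface CompletionRequest {
  prompt: string;
  max_tokens: number;
  temperature: number;
}

// Build the JSON payload for a completion request.
// 0.7 is an arbitrary default temperature chosen for this sketch.
function buildCompletionRequest(prompt: string, maxTokens = 128): CompletionRequest {
  return { prompt, max_tokens: maxTokens, temperature: 0.7 };
}

// POST the prompt to the local server and return the generated text.
// OpenAI-style responses carry the generation in choices[0].text.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildCompletionRequest(prompt)),
  });
  const data: any = await res.json();
  return data.choices[0].text;
}
```

Because the endpoint follows the OpenAI API schema, the same sketch would work against any OpenAI-compatible backend by changing only the URL.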
Demo: `obsidian-local-llm-canvas-typewriter.webm`
Contributions to the plugin are welcome! If you would like to contribute, please fork the repository and submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.