zeitlings / alfred-ollama

Dehydrated Ollama CLI Interface

Alfred Ollama

Dehydrated Ollama command line interface to

  1. Manage your local language models
  2. Perform local inference through Alfred
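
Both tasks ultimately go through the local Ollama server. As a minimal sketch of what that involves (assuming Ollama's default HTTP API at `localhost:11434`; the `/api/tags` and `/api/generate` routes are Ollama's documented endpoints, while the helper functions here are purely illustrative, not this workflow's code):

```python
import json
import urllib.request

# Ollama's default local endpoint.
OLLAMA = "http://localhost:11434"

def generate_payload(model: str, prompt: str) -> dict:
    """Request body for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def list_models() -> list:
    """Names of the locally installed models (GET /api/tags)."""
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def generate(model: str, prompt: str) -> str:
    """Run one completion against a local model (POST /api/generate)."""
    body = json.dumps(generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With the Ollama application running and a model installed, `generate("llama3.2", "Why is the sky blue?")` returns a single completion string; the workflow's model management and chat features build on these same endpoints.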

Requirements

The Ollama macOS application, at least one installed model for chat or other inference tasks, and the Xcode Command Line Tools.[^1] To modify the workflow's inference actions or add custom ones to its Universal Action, install pkl, edit the configuration file, and build the inference tasks.

Usage

Manage your local models or chat with them via the ollama keyword. Alternatively, define Hotkeys for quick access.

Local Models

Loaded Models

New Models

Type to match models based on your query.

Model Versions

Type to match versions based on your query.

Pulling Models

Local Chat

Chat History

Inference Actions

Inference Actions provide a suite of language tools for text generation and transformation. These tools enable summarization, clarification, concise writing, and tone adjustment for selected text. They can also correct spelling, expand and paraphrase text, follow instructions, answer questions, and improve text in other ways.
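Conceptually, each action pairs the selected text with a task-specific instruction before sending it to a local model. A rough sketch of that idea (the action names and prompt strings below are invented for illustration; the actual workflow defines its actions in the pkl configuration file, and the message format shown is Ollama's `/api/chat` schema):

```python
# Hypothetical task instructions, one per inference action.
ACTIONS = {
    "summarize": "Summarize the following text concisely.",
    "fix_spelling": "Correct spelling and grammar. Return only the corrected text.",
    "adjust_tone": "Rewrite the following text in a professional tone.",
}

def chat_payload(action: str, text: str, model: str = "llama3.2") -> dict:
    """Build an Ollama /api/chat request body for one inference action:
    the action's instruction becomes the system message, the selected
    text becomes the user message."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": ACTIONS[action]},
            {"role": "user", "content": text},
        ],
    }
```

This keeps the actions declarative: adding a new transformation is just adding another instruction entry, which mirrors why the workflow exposes its actions through an editable pkl configuration.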

Access a list of all available actions via the Universal Action or by setting the Hotkey trigger.

[!IMPORTANT] Make sure to use this only when the frontmost UI element accepts text.
There are no security checks in place at the moment.


Footnotes
[^1]: `xcode-select --install`