
An NVDA Add-on for Integration with OpenAI, MistralAI, and OpenRouter APIs
GNU General Public License v2.0

NOTE: The future of this add-on is BasiliskLLM (a standalone application and a minimal NVDA add-on). We highly recommend you consider using BasiliskLLM instead of this one.

Open AI NVDA add-on

This add-on is designed to seamlessly integrate the capabilities of the OpenAI API into your workflow. Whether you're looking to craft comprehensive text, translate passages with precision, concisely summarize documents, or even interpret and describe visual content, this add-on does it all with ease.

The add-on also supports integration with Mistral and OpenRouter services, thanks to their shared API format.
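
Because the services share the same chat-completions format, switching providers mostly amounts to changing the base URL, API key, and model name. The following is a rough sketch using the official openai Python package; the URLs and model names are illustrative and this is not the add-on's internal code:

```python
# Illustration only -- not the add-on's code. The three providers expose the
# same chat-completions shape, so one OpenAI-style client covers all of them.
from openai import OpenAI

PROVIDERS = {
    "openai":     {"base_url": "https://api.openai.com/v1",    "model": "gpt-4o-mini"},
    "mistralai":  {"base_url": "https://api.mistral.ai/v1",    "model": "mistral-small-latest"},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1", "model": "openai/gpt-4o-mini"},
}

def ask(provider: str, api_key: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(api_key=api_key, base_url=cfg["base_url"])
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```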

Installation Steps

  1. Navigate to the releases page to find the latest version of the add-on.
  2. Download the latest release from the provided link.
  3. Execute the installer to add the add-on to your NVDA environment.

API Key Configuration

To use this add-on, you need to configure it with an API key from your selected service provider(s) (OpenAI, Mistral AI, and/or OpenRouter). Each provider offers a straightforward process for API key acquisition and integration.
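
If you want to check that a freshly obtained key actually works before entering it in the add-on, a quick sanity test with the openai Python package (again, just a sketch, separate from the add-on) is to list the models the key can access:

```python
# Standalone sanity check for an API key -- not part of the add-on.
from openai import OpenAI

client = OpenAI(api_key="sk-...")   # the key you obtained from the provider
try:
    models = client.models.list()   # raises an authentication error if the key is invalid
    print(f"Key accepted; {len(models.data)} models available.")
except Exception as exc:
    print(f"Key rejected: {exc}")
```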

Once you have your API key, the next step is to integrate it with the add-on.

You are now equipped to explore the features of the OpenAI NVDA add-on!

How to Use the Add-on

The Main Dialog

The majority of the add-on's features can be accessed via a dialog box, which can be launched by pressing NVDA+G. As an alternative, navigate to the "Open AI" submenu under the NVDA menu and select the "Main Dialog…" item. From this dialog you can, among other things, send prompts to the model, adjust the system prompt, and toggle conversation mode (both described below).

Increase your productivity with shortcuts

To further improve your interaction with the interface, please take note of the following:

All keyboard shortcuts are displayed next to the labels of their corresponding elements.

About the Conversation Mode Checkbox

The conversation mode checkbox is designed to enhance your chat experience and save input tokens.

When it is checked (the default), the add-on sends the entire conversation history to the AI model, giving it better context and producing more coherent responses. This comprehensive mode also consumes more input tokens.

Conversely, when the checkbox is unticked, only the current prompt is sent to the AI model. Use this mode to ask specific, standalone questions; because no history is sent, it conserves input tokens when the dialogue's context isn't needed.

You can switch between the two modes at any time during a session.
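
To make the difference concrete, here is a rough sketch of what each mode sends in terms of the OpenAI-style messages list (an illustration, not the add-on's internal code):

```python
# Conceptual sketch of the payload difference between the two modes.
history = [
    {"role": "user", "content": "Summarize this paragraph: ..."},
    {"role": "assistant", "content": "Here is a short summary: ..."},
    {"role": "user", "content": "Now translate that summary into French."},
]
system = {"role": "system", "content": "You are a helpful assistant."}

# Conversation mode ON (default): the full history is sent, so the model
# knows what "that summary" refers to, at the cost of more input tokens.
messages_on = [system] + history

# Conversation mode OFF: only the latest prompt is sent; cheaper, but the
# model has no context for follow-up questions.
messages_off = [system, history[-1]]
```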

About the "System prompt" Field

The "System prompt" field is designed to fine-tune the AI model's behavior and personality to match your specific expectations.

Please be aware that the system prompt is included in the AI model's input data, consuming tokens accordingly.
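
If you want a rough idea of how many tokens a given system prompt costs, you can count them yourself, for example with the tiktoken package (assumed here for illustration; the add-on does not require you to do this):

```python
# Rough token count for a system prompt; tiktoken is assumed for this sketch.
import tiktoken

system_prompt = "You are a concise assistant. Answer in at most two sentences."
encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many OpenAI chat models
print(len(encoding.encode(system_prompt)), "tokens for the system prompt alone")
```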

Global Commands

These commands can be used to trigger actions from anywhere on your computer. They can be reassigned from the Input Gestures dialog, under the Open AI category.
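
For context, NVDA commands of this kind are declared in a global plugin with the script decorator; the category passed to the decorator is what groups them under "Open AI" in the Input Gestures dialog. A minimal, hypothetical sketch (not the add-on's actual source):

```python
# Hypothetical sketch of how an NVDA global command is declared;
# the real add-on differs, but the mechanism is the same.
import globalPluginHandler
import ui
from scriptHandler import script


class GlobalPlugin(globalPluginHandler.GlobalPlugin):

    @script(
        description="Open the Open AI main dialog",  # shown in Input Gestures
        category="Open AI",                          # the category users look under
        gesture="kb:NVDA+g",                         # default gesture; reassignable
    )
    def script_openMainDialog(self, gesture):
        ui.message("Main dialog would open here.")   # placeholder action
```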

Included Dependencies

The add-on comes bundled with the following essential dependencies: