nvms / wingman

Your pair programming wingman. Supports OpenAI, Anthropic, or any LLM on your local inference server.
https://marketplace.visualstudio.com/items?itemName=nvms.ai-wingman
ISC License

A highly flexible, customizable, and powerful extension for working with LLMs within Visual Studio Code.

Highlights:

![image](.github/media/diff.png)

Usage

If you're using a local inference server, you can skip setting an API key.

Otherwise, configure an API key for each provider you plan to use: open the command palette (Windows: Ctrl+Shift+P, macOS: Cmd+Shift+P), run "Wingman: Set API key", select the provider, enter your API key, and press enter.

Core concepts

There are three concepts that are crucial to understanding how Wingman operates.

It's really not that complicated.

Prompts

A UI is included for prompt management.

Wingman makes your prompts dynamic with support for placeholders.

Current placeholders:

![image](.github/media/promptui.png)
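As an illustration of how placeholder substitution works, here is a minimal sketch. The placeholder names used below ({{language}}, {{selection}}) and the interpolation function are assumptions for the example, not the extension's documented placeholder set:

```typescript
// Hypothetical placeholder interpolation — replaces each {{name}} in a
// prompt template with its value, leaving unknown placeholders untouched.
function interpolate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) => values[key] ?? match);
}

const prompt = interpolate(
  "Explain the following {{language}} code:\n\n{{selection}}",
  { language: "typescript", selection: "const x = 1;" }
);
// prompt === "Explain the following typescript code:\n\nconst x = 1;"
```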

Presets

A UI is included for preset management.

A preset is a provider configuration. It defines the system message, the provider, the API URL, and completion parameters. You can create as many presets as you want and switch between them whenever you like.

![image](.github/media/presetui.png)
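The fields a preset holds can be sketched as a TypeScript shape. The field names and values below are illustrative assumptions, not the extension's actual schema:

```typescript
// Hypothetical preset shape — mirrors the pieces a preset configures:
// provider, API URL, system message, and completion parameters.
interface Preset {
  name: string;
  provider: "openai" | "anthropic" | "custom";
  url: string;                // API endpoint, e.g. a local inference server
  system: string;             // system message sent with every request
  completionParams: {         // forwarded to the provider's completion API
    temperature?: number;
    maxTokens?: number;
  };
}

// Example: a preset pointing at a local OpenAI-compatible server.
const localPreset: Preset = {
  name: "local-llama",
  provider: "custom",
  url: "http://localhost:8080/v1/chat/completions",
  system: "You are a concise pair programmer.",
  completionParams: { temperature: 0.3, maxTokens: 1024 },
};
```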

Modes

A UI is included for mode management.

Modes enhance the prompt and preset management experience. A mode is a collection of presets and prompts. Three built-in modes are provided as examples: "Programming", "Creative writing", and "Technical writing", but you are encouraged to create your own.

Modes can have presets assigned to them. Here's why this is useful:

Switching between modes automatically activates the last preset used in that mode.

![image](.github/media/modeswitch.gif)

Development

  1. In /webview: npm run dev. This is a Svelte project that outputs to /extension/dist.
  2. In /extension: npm run build:watch
  3. Run the extension using the debug panel.
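The steps above can be sketched as a shell session. Directory names are taken from the steps; this assumes you run each watcher in its own terminal:

```shell
# Terminal 1: Svelte webview watcher, rebuilds into extension/dist
cd webview
npm run dev

# Terminal 2: the extension itself
cd extension
npm run build:watch

# Finally, open the repo in VS Code and launch the extension
# from the Run and Debug panel (F5).
```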

TODO