FlexiGPT plugin for VSCode. Interact with AI models as a power user.
https://flexigpt.site
MIT License

FlexiGPT


Introduction

Interact with GPT AI models as a power user.

Getting started

Installation

Download from the VSCode Marketplace and follow the instructions.

OR

Steps:

  1. Install Visual Studio Code 1.74.0 or later
  2. Launch Visual Studio Code
  3. From the command palette, Ctrl-Shift-P (Windows, Linux) or Cmd-Shift-P (macOS), run > Extensions: Install Extensions.
  4. Choose the extension FlexiGPT by ppipada.
  5. Restart Visual Studio Code after the installation

Configuration

FlexiGPT level configuration

The available options are illustrated in the sample full configuration below.

Sample Full configuration

// flexigpt basic configuration
"flexigpt.promptFiles": "/home/me/my_prompt_files/myprompts.js",
"flexigpt.inBuiltPrompts": "gobasic.js;gosql.js",
"flexigpt.defaultProvider": "openai",

// openai provider configuration
"flexigpt.openai.apiKey": "sk-mkey",
"flexigpt.openai.timeout": "120",
"flexigpt.openai.defaultCompletionModel": "gpt-3.5-turbo",
"flexigpt.openai.defaultChatCompletionModel": "gpt-3.5-turbo",
"flexigpt.openai.defaultOrigin": "https://api.openai.com",

// anthropic provider configuration
"flexigpt.anthropic.apiKey": "sk-mkey",
"flexigpt.anthropic.timeout": "120",
"flexigpt.anthropic.defaultCompletionModel": "claude-3-haiku-20240307",
"flexigpt.anthropic.defaultChatCompletionModel": "claude-3-haiku-20240307",
"flexigpt.anthropic.defaultOrigin": "https://api.anthropic.com",

// huggingface provider configuration
"flexigpt.huggingface.apiKey": "hf-mkey",
"flexigpt.huggingface.timeout": "120",
"flexigpt.huggingface.defaultCompletionModel": "bigcode/starcoder2-15b",
"flexigpt.huggingface.defaultChatCompletionModel": "deepseek-ai/deepseek-coder-1.3b-instruct",
"flexigpt.huggingface.defaultOrigin": "https://api-inference.huggingface.co",

// googlegl provider configuration
"flexigpt.googlegl.apiKey": "gl-mkey",
"flexigpt.googlegl.timeout": "120",
"flexigpt.googlegl.defaultCompletionModel": "gemini-1.0-pro",
"flexigpt.googlegl.defaultChatCompletionModel": "gemini-1.0-pro",
"flexigpt.googlegl.defaultOrigin": "https://generativelanguage.googleapis.com",

// llamacpp provider configuration
"flexigpt.llamacpp.apiKey": "",
"flexigpt.llamacpp.timeout": "120",
"flexigpt.llamacpp.defaultOrigin": "127.0.0.1:8080",

AI Providers

OpenAI

Anthropic Claude

Huggingface

Google generative language - Gemini / PaLM2 API

LLaMA cpp

Features

Get Code

Get code using a comment in the editor.

Inbuilt Prompts

The steps are the same for all configured prompts (inbuilt or custom); see the UI behavior section under Chat below.

Refactor selection

Rectify and refactor selected code.

Generate unit test

Create unit test for selected code.

Complete

Complete the selection.

Explain code

Explain the selection.

Generate Documentation

Generate documentation for the selected code.

Find problems

Find problems with the selection, fix them and explain what was wrong.

Optimize selection

Optimize the selected code.

Chat

UI behavior

Invocation

The chat activity bar can be opened in the following ways:

  1. Click the FlexiGPT icon in the activity bar
  2. Select text/code and right-click, i.e., the editor context menu: You should get a FlexiGPT: Ask option to click/enter
  3. Command Palette (Ctrl/Cmd + Shift + P): You should get a FlexiGPT: Ask option to click/enter
  4. Keyboard shortcut: Ctrl + Alt + A

Prompts behavior

Search Stack Overflow

Search for Stack Overflow questions from your editor.

Run Custom CLIs from within the editor

Prompting

Features

UI behavior

Inbuilt prompts

Prompt files format

Sample prompt files in repo

Simple JavaScript prompt file

module.exports = {
  namespace: "myprompts",
  commands: [
    {
      name: "Refactor",
      template: `Refactor following function.
            function:
            {system.selection}`,
    },
  ],
};

Complex JavaScript prompt file

module.exports = {
  namespace: "MyComplexPrompts",
  commands: [
    {
      name: "Create unit test.",
      template: `Create unit test in {user.unitTestFramework} framework for following function.
            code:
            {system.selection}`,
      responseHandler: {
        func: "writeFile",
        args: {
          filePath: "user.testFileName",
        },
      },
      requestparams: {
        model: "gpt-3.5-turbo",
        stop: ["##", "func Test", "package main", "func main"],
      },
    },
    {
      name: "Write godoc",
      template: `Write godoc for following functions.
            code:
            {system.selection}`,
      responseHandler: {
        func: "append",
        args: {
          position: "start",
        },
      },
      requestparams: {
        model: "code-davinci-002",
        stop: ["##", "func Test", "package main", "func main"],
      },
    },
  ],
  functions: [
    // you could also write your own responseHandler.
    // Note that it takes a single object as input.
    function myHandler({ system, user }) {
      console.table({ system });
      console.table({ user });
    },
  ],
  variables: [
    {
      name: "unitTestFramework",
      value: "testing",
    },
    {
      name: "testFileName",
      value: ({ baseFolder, fileName, fileExtension }) =>
        `${baseFolder}\\${fileName}_test${fileExtension}`,
    },
  ],
  cliCommands: [
    {
      name: "Go generate all",
      command: `go generate ./...`,
      description: "Run go generate in the workspace",
    },
  ],
};

Creating Command
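
A minimal sketch assembled from the sample prompt files above; treating name and template as the required fields and responseHandler/requestparams as optional extras is an assumption based on those samples, and the names here are illustrative:

module.exports = {
  namespace: "myprompts",
  commands: [
    {
      // Display name of the command, as shown in the samples above.
      name: "Refactor selection",
      // Prompt template sent to the AI; it can reference system and user variables.
      template: `Refactor the following code.
            code:
            {system.selection}`,
      // Optional: how to handle the AI response. args are omitted here on the
      // assumption that the defaults listed under Predefined System Functions apply.
      responseHandler: {
        func: "replace",
      },
      // Optional: request parameters for this command, as in the complex sample.
      requestparams: {
        model: "gpt-3.5-turbo",
      },
    },
  ],
};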

Creating Variables

Any of the variables items can be used in a command template. User-defined values must have the "user" prefix. For example, if "testFileName" is defined in variables, it can be used as "user.testFileName" in the template file or passed to a function.

Variable values can be static or dynamic. For dynamic values, you should create a getter method. When the variable getter is called, a single object containing the system variables (see Predefined System Variables) is passed as the first argument; any other variables can be passed as subsequent arguments.

module.exports = {
  variables: [
    {
      // static
      name: "testingFramework",
      value: "xUnit",
    },
    {
      // dynamic
      name: "typeNameInResponse",
      value: ({ answer /* system variable */ }, myTestFile /* user-defined var */) => {},
    },
  ],
  functions: [
    function extractTypeName({ code, system }) {
      /**/
    },
    function myOtherFunc() {},
  ],
  commands: [
    {
      name: "Create DTO",
      template: `Create unit test with {user.testingFramework} for following class.
        class:
        {system.selection}`,
      responseHandler: {
        func: "writeFile",
        args: {
          filePath: "user.typeNameInResponse" /* usage as a function arg */,
        },
      },
    },
  ],
};

Predefined System Variables

All vars are case-insensitive.

| Variable name | Description |
| --- | --- |
| system.selection | Selected text in the editor |
| system.question | OpenAI question |
| system.answer | OpenAI answer |
| system.language | Programming language of the active file |
| system.baseFolder | Project base path |
| system.fileFolder | Parent folder path of the active file |
| system.fileName | Name of the active file |
| system.filePath | Full path of the active file |
| system.fileExtension | Extension of the active file |
| system.commitAndTagList | Last 25 commits and associated tags |
| system.readFile | Read the full open editor file. Optionally pass a file path as a second argument |

Note that the system. prefix for a system variable is optional. Therefore, you can even use only {selection} to use the selected text, or {language} instead of {system.language} for the language of your file.
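
For example, in a command template (a hypothetical command, shown only to illustrate the optional prefix), the unprefixed and prefixed forms resolve to the same values:

module.exports = {
  namespace: "myprompts",
  commands: [
    {
      name: "Explain selection",
      // {language} and {selection} resolve the same as
      // {system.language} and {system.selection}.
      template: `Explain the following {language} code.
            code:
            {selection}`,
    },
  ],
};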

Creating Functions
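
A minimal sketch based on the comment in the complex sample above ("you could also write your own responseHandler"): custom functions are declared in the functions array and receive a single object as input. That a custom function can be referenced by name from responseHandler.func is an assumption, and the names here are illustrative:

module.exports = {
  namespace: "myprompts",
  functions: [
    // A custom handler: it receives a single object carrying the
    // system and user variables (per the note in the sample above).
    function logAnswer({ system, user }) {
      console.log(system.answer);
    },
  ],
  commands: [
    {
      name: "Summarize selection",
      template: `Summarize the following code.
            code:
            {system.selection}`,
      // Assumption: the custom function is referenced by its name.
      responseHandler: {
        func: "logAnswer",
      },
    },
  ],
};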

Predefined System Functions

| Function name | Description | Params (default) |
| --- | --- | --- |
| append | Append text | textToAppend(system.answer), position('end') |
| replace | Replace selected text | textToReplace(system.answer) |
| writeFile | Write text to a file. Append if the file exists. | filePath(), content(system.answer) |

Creating CLI Commands
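
CLI commands are declared in the same prompt files under cliCommands, as shown in the complex sample above. A minimal sketch; the command shown is only an illustration:

module.exports = {
  namespace: "myprompts",
  cliCommands: [
    {
      // Display name in the CLI command list.
      name: "Run all tests",
      // Shell command executed in the workspace.
      command: `go test ./...`,
      description: "Run go tests for all packages in the workspace",
    },
  ],
};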

Requirements and TODO

| Functional areas | Features and Implementations | Status |
| --- | --- | --- |
| Flexibility to talk to any AI | Integration with multiple AI providers through APIs. | Done |
| | Support parameter selection and handle different response structures. | Done |
| Flexibility to use custom prompts | Support for prompt engineering that enables creating and modifying prompts via a standard structure. | Done |
| | Allow request parameter modification. | Done |
| | Allow adding custom response handlers to massage the response from AI. | Done |
| | Provide common predefined variables that can be used to enhance the prompts. | Done |
| | Provide extra prompt enhancements using custom variables that can be static or function getters. This should allow function definitions in the prompt structure and integrate the results into prompts. Also allow passing system vars, user vars, or static strings as inputs. | Done |
| | Provide capability to evaluate different prompts, assign ELO ratings, and choose and save the strongest. | Long term |
| Seamless UI integration | Design a flexible UI, a chat interface integrated into the VSCode activity bar. | Done |
| | The UI must support saving, loading, and exporting of conversations. | Done |
| | Implement streaming typing in the UI, creating a feeling that the AI bot is typing itself. | Long term |
| Adhoc queries/tasks | Help the developer ask adhoc queries to AI where they can describe questions or issues using the chat interface. This can be used to debug issues, understand behaviour, get hints on things to look out for, etc. The developer should be able to attach code or files to their questions. | Done |
| | Provide a way to define pre-cooked CLI commands and fire them as needed. The interface to define CLI commands should be similar to prompts. | Done |
| | Provide a way to search queries on StackOverflow. | Done |
| | Provide a way to get results for queries from StackOverflow answers and the corresponding AI answer. | Long term |
| Code completion and intelligence | Provide a way to generate code from a code comment. | Done |
| | Provide a way to complete, refactor, edit, or optimize code via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Implement a context management system integrated with the Language Server Protocol (LSP) that can be used to enrich AI interactions. | Medium term |
| | Support generating code embeddings to understand the code context and integrate it into prompts. | Medium term |
| | Develop an intelligent code completion feature that predicts the next lines of code. It should integrate context (LSP or embeddings) into autocomplete prompts and handle autocomplete responses in the UI. | Medium term |
| Code review and intelligence | Provide a way to review code via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Ability to fetch a Merge/Pull request from GitHub, GitLab, or other version providers, analyse it, and provide review comments. Should provide flexibility to specify review areas and associated priority depending on the use case. | Medium term |
| | Provide automated code reviews and recommendations. It should provide subtle indicators for code improvements and handle code review API responses in the UI. | Long term |
| | Provide automated refactoring suggestions. This should handle refactoring API responses and display suggestions in the UI. | Long term |
| | Provide automated security suggestions. This should be able to identify potential vulnerabilities being added or deviations from security best practices used in code. | Long term |
| Code documentation assistance | Generate documentation for the selected code using the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Develop effective inline documentation assistance. It should automatically generate and update documentation based on the code and display it in the UI. | Long term |
| Code understanding and learning support | Provide a way to explain code via the chat interface. Should allow selecting relevant code from the editor as needed. | Done |
| | Develop/integrate with a knowledge graph to provide detailed explanations of services, APIs, methods, algorithms, and concepts the developer is using or may want to use. | Long term |
| | Integrate graph search into prompts. | Long term |
| Testing | Provide a way to generate unit tests via the chat interface. Should allow selecting relevant code from the editor as needed. Should have the ability to insert tests in new files or the current file as needed. | Done |
| | Provide a way to generate API and associated workflow tests via the chat interface. Should allow selecting relevant code/API definitions from the editor as needed. Should have the ability to insert tests in new files or the current file as needed. | Short term |

License

FlexiGPT is fully open source software licensed under the MIT license.

Contributions

Contributions are welcome! Feel free to submit a pull request on GitHub.

Support

If you have any questions or problems, please open an issue on GitHub at the issues page.