GNU Affero General Public License v3.0

PolyMind

PolyMind is a multimodal, function-calling-powered LLM web UI. It is designed for use with Mixtral 8x7B-Instruct/Mistral-7B-Instruct-v0.2 plus TabbyAPI, but it can also be used with other models, with llama.cpp's included server, and, when using compatibility mode + TabbyAPI mode, with any endpoint that supports /v1/completions. It offers a wide range of features.
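In compatibility mode, the backend only needs to accept the common OpenAI-style /v1/completions request shape. As a sketch of what such a request body looks like (field names follow the widespread /v1/completions convention; the exact parameters supported depend on the backend, e.g. TabbyAPI or llama.cpp's server):

```python
import json

def build_completion_request(prompt: str, max_tokens: int = 32,
                             temperature: float = 0.7) -> dict:
    """Build a generic OpenAI-style /v1/completions request body.

    Illustrative only -- PolyMind constructs its own requests; this just
    shows the minimal shape a compatible endpoint is expected to accept.
    """
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Serialized body you would POST to http://<host>/v1/completions
body = json.dumps(build_completion_request("Hello"))
```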

90% of the web parts (HTML, JS, CSS, and Flask) are written entirely by Mixtral.

Note: The Python interpreter is intentionally delayed by 5 seconds so you can check the code before it is run.
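The delay gate can be sketched as follows (a hypothetical helper for illustration, not PolyMind's actual implementation -- the function name and return value are assumptions):

```python
import time

def run_python(code: str, delay: float = 5.0) -> dict:
    """Print model-generated code, wait, then execute it.

    The sleep gives you a review window to abort (Ctrl+C) before the
    code actually runs, mirroring the 5-second delay described above.
    """
    print("About to run:\n" + code)
    time.sleep(delay)          # review/abort window
    scope: dict = {}
    exec(code, scope)          # execute in an isolated namespace
    return scope               # resulting variables, for inspection
```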

Note: When making multiple function calls simultaneously, only one image can be returned at a time. For instance, if you ask it to generate an image of a dog with ComfyUI and plot a sine wave with matplotlib, only one of them will be displayed.

Note: When using RAG, make it clear that you are requesting information from the file you've uploaded.

Installation

  1. Clone the repository: git clone https://github.com/itsme2417/PolyMind.git && cd PolyMind
  2. Install the required dependencies: pip install -r requirements.txt
  3. Install the required node modules: cd static && npm install
  4. Copy config.example.json as config.json and fill in required settings.

For the ComfyUI stablefast workflow, make sure to have ComfyUI_stable_fast installed. For the img2img workflow, make sure to have comfyui-base64-to-image installed.

Usage

To use PolyMind, run the following command in the project directory:

python main.py

There are no "commands" or similar; everything is done via function calling. To clear the context, ask the model to do so; the enabled features can likewise be temporarily enabled or disabled the same way.
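Requests like "clear the context" work because the model emits a function call that the backend dispatches to a handler. A minimal sketch of such a dispatcher (all names here are hypothetical; PolyMind's real function set lives in the plugins directory):

```python
import json

# Hypothetical handlers standing in for PolyMind's built-in functions.
def clear_context(state):
    state["history"].clear()
    return "Context cleared."

def toggle_feature(state, name, enabled):
    state["features"][name] = enabled
    return f"{name} {'enabled' if enabled else 'disabled'}."

HANDLERS = {"clear_context": clear_context, "toggle_feature": toggle_feature}

def dispatch(call_json: str, state: dict) -> str:
    """Decode a model-emitted function call and run the matching handler."""
    call = json.loads(call_json)
    handler = HANDLERS[call["name"]]
    return handler(state, **call.get("arguments", {}))
```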

For plugins, check the plugins directory.

For an example of how to use PolyMind as a basic API server, check Examples.

Configuration

The application's configuration is stored in the config.json file.

Donations

Patreon: https://www.patreon.com/llama990

LTC: Le23XWF6bh4ZAzMRK8C9bXcEzjn5xdfVgP

XMR: 46nkUDLzVDrBWUWQE2ujkQVCbWUPGR9rbSc6wYvLbpYbVvWMxSjWymhS8maYdZYk8mh25sJ2c7S93VshGAij3YJhPztvbTb

If you want to mess around with my LLM Discord bot, or join for whatever reason, here's a Discord server: https://discord.gg/zxPCKn859r

Screenshots
