
ChatGptUK-Wrapper

Welcome to ChatGptUK-Wrapper, a powerful Flask server that allows you to interact with OpenAI models, including GPT-3.5, GPT-3.5-16K, and even the cutting-edge GPT-4. This versatile wrapper can run locally on your PC or be globally accessible.

Getting Started

Prerequisites

Before you begin, make sure you have Python 3 and pip installed. You will also need Git to clone the repository.

Installation

  1. Clone the repository:

    git clone https://github.com/itszerrin/ChatGptUK-Wrapper.git
  2. Navigate to the project folder:

    cd ChatGptUK-Wrapper
  3. Install dependencies:

    pip install -r requirements.txt

Configuration

The server reads its configuration from the assets/config.json file. You can customize parameters such as host, port, and global access in this file.

{
    "host": "0.0.0.0",
    "port": 5000,
    "debug": false,
    "global": true
}
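
As a rough sketch of how these values might be consumed (assuming the keys map directly onto Flask's app.run() arguments and that "global" toggles outside access; see app.py for the actual logic):

import json
from flask import Flask

app = Flask(__name__)

# Load settings from assets/config.json
with open("assets/config.json") as f:
    config = json.load(f)

# Assumption: "global" decides whether the server is reachable from other machines
host = config["host"] if config.get("global") else "127.0.0.1"

app.run(host=host, port=config["port"], debug=config["debug"])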

Usage

  1. Run the Flask app:

    python app.py
  2. Access the server at http://localhost:5000 (adjust the URL based on your configuration).

API Endpoints

1. /chat/completions (POST): accepts an OpenAI-style chat request and returns a completion (see the examples below).

2. /models (GET): returns the models available through the wrapper (queried in the snippet below).

3. / (GET): the index route of the server.
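
For example, a quick way to check that the server is up and see which models it exposes (a minimal sketch; the exact shape of the JSON response depends on the server implementation):

import requests

# Ask the running server which models it exposes
response = requests.get("http://localhost:5000/models")
print(response.json())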

Examples

1. Chat Completion (Non-Streamed)

import requests

url = "http://localhost:5000/chat/completions"
data = {
    "messages": [{"role": "user", "content": "Hello, ChatGptUK-Wrapper!"}],
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5,
    "top_p": 1.0,
    "stream": False
}

response = requests.post(url, json=data)
print(response.json())
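
If the response body follows the usual OpenAI chat-completion schema (an assumption here, since the wrapper mirrors the OpenAI endpoints rather than documenting its own format), the reply text can be extracted like this:

# Assumption: OpenAI-style body, i.e. {"choices": [{"message": {"content": ...}}], ...}
reply = response.json()["choices"][0]["message"]["content"]
print(reply)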

2. Chat Completion (Streamed) - Example Script

example.py

from assets.src.api import API

# Create an instance of the API
Api = API()

# Define a set of messages for the conversation
messages = [
    {"role": "system", "content": "You are GPT-4. The most advanced chatbot in the world. You have web search capabilities, calculator, and web browser. You can also do translations, and much more."},
    {"role": "user", "content": "Hi, what's the weather like in Washington DC?"},
]

# Stream responses from the API
for chunk in Api.chat(
    messages=messages,
    model="gpt-4-1106-preview",
    temperature=1,
):
    # Print each chunk as it arrives
    print(chunk, end="", flush=True)

# Expected Output: Gradually, words appear as the model processes the input.

This example shows how the ChatGptUK-Wrapper API can stream responses in real time. Adjust the messages, model, and other parameters to suit your use case. Feel free to experiment and integrate this functionality into your own applications.
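
For instance, since Api.chat() yields plain text chunks in the script above, you can just as easily collect the whole reply into a single string instead of printing it as it streams:

# Join all streamed chunks into one complete response string
full_reply = "".join(
    Api.chat(messages=messages, model="gpt-4-1106-preview", temperature=1)
)
print(full_reply)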