# ChatGPT API Free Reverse Proxy

Welcome to the ChatGPT API Free Reverse Proxy, offering free self-hosted API access to ChatGPT (`gpt-3.5-turbo`) with OpenAI's familiar structure, so no code changes are needed.
## Self-Host Using Docker

Run the proxy with Docker:

```shell
docker run -dp 3040:3040 pawanosman/chatgpt:latest
```

Your local server will be accessible at `http://localhost:3040/v1/chat/completions`. Note that the base URL is `http://localhost:3040/v1`.
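Once the container is up, you can smoke-test the endpoint with a plain `curl` request (the model name and prompt below are just examples; this requires the proxy to be running):

```shell
# Send a minimal chat completion request to the local proxy.
curl http://localhost:3040/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

A successful response is a JSON object in OpenAI's chat completion format.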
✅ You can run third-party chat web interfaces, such as BetterChatGPT and LobeChat, with this API using Docker Compose. Click here for the installation guide.
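For reference, a minimal `docker-compose.yml` for the proxy itself might look like the sketch below; a chat web interface would be added as a second service per its own documentation (the service name here is arbitrary):

```yaml
services:
  chatgpt-proxy:
    image: pawanosman/chatgpt:latest
    ports:
      - "3040:3040"
    restart: unless-stopped
```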
## Installing and Self-Hosting

To install and run the ChatGPT API Reverse Proxy on your PC or server, follow these steps:

Note: This option is not yet available in all countries. If you are in an unsupported country, use a U.S. VPN or our hosted API.
1. Clone the repository:

   ```shell
   git clone https://github.com/PawanOsman/ChatGPT.git
   ```

2. Run `start.bat` (Windows) or `start.sh` (Linux, via the `bash start.sh` command) to install dependencies and launch the server.

3. Your local server will be accessible at `http://localhost:3040/v1/chat/completions`. Note that the base URL will be `http://localhost:3040/v1`.
## Installing on Android (Termux)

To install and run the ChatGPT API Reverse Proxy on Android using Termux, follow these steps:

1. Install Termux from the Play Store.
2. Update Termux packages: `apt update`
3. Upgrade Termux packages: `apt upgrade`
4. Install Git and Node.js (npm is included): `apt install -y git nodejs`
5. Clone the repository: `git clone https://github.com/PawanOsman/ChatGPT.git`
6. Navigate to the cloned directory: `cd ChatGPT`
7. Start the server: `bash start.sh`

Your local server will now be running and accessible at `http://localhost:3040/v1/chat/completions`. Note that the base URL will be `http://localhost:3040/v1`.

You can now use this address to connect to your self-hosted ChatGPT API Reverse Proxy from Android applications and websites on the same device that support reverse proxy configurations.
## Using the Hosted API

Utilize our pre-hosted ChatGPT-like API for free: request an API key in the `#Bot` channel with the `/key` command, then send your requests to `https://api.pawan.krd/v1/chat/completions`.
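A request to the hosted endpoint has the same shape as OpenAI's API, with your key in the `Authorization` header (`YOUR_API_KEY` below is a placeholder):

```shell
# Send a minimal chat completion request to the hosted API.
curl https://api.pawan.krd/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```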
## Usage Examples

Leverage the same integration code as OpenAI's official libraries by simply adjusting the API key and base URL in your requests. For self-hosted setups, be sure to switch the base URL to your local server's address as described above.
```python
import openai

# The key can be any placeholder for the self-hosted proxy,
# but the client requires one to be set.
openai.api_key = "anything"
openai.base_url = "http://localhost:3040/v1/"

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "How do I list all files in a directory using Python?"},
    ],
)

print(completion.choices[0].message.content)
```
```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: "anything",
  baseURL: "http://localhost:3040/v1",
});

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
});

console.log(chatCompletion.choices[0].message.content);
```
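Because the proxy mirrors OpenAI's response schema, parsing the result is identical regardless of which backend you target. A minimal sketch of the shape involved, using a hard-coded illustrative payload rather than a live response:

```python
# Extract the assistant reply from an OpenAI-style chat completion payload.
# The sample dict below is illustrative, not an actual proxy response.
def extract_reply(completion: dict) -> str:
    """Return the assistant's message text from a chat completion."""
    return completion["choices"][0]["message"]["content"]

sample = {
    "id": "chatcmpl-example",
    "object": "chat.completion",
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "This is a test"},
            "finish_reason": "stop",
        }
    ],
}

print(extract_reply(sample))  # This is a test
```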
## License

This project is licensed under the AGPL-3.0 License. Refer to the LICENSE file for detailed information.