🤖️ Turn your WeChat into ChatGPT within only 2 steps! 🤖️
This project is based on this amazing project that I contributed to before. With the Wechaty SDK
and the OpenAI API, we achieve:
- `gpt-4o` and `gpt-3.5-turbo`, the OpenAI models which power ChatGPT
- Deployment on Railway

You can deploy locally or on the cloud, whichever you want. Deploying on the cloud is recommended.
- `openaiApiKey` can be generated on the API Keys page of your OpenAI account
- `openaiOrganizationID` is optional; it can be found on the Settings page of your OpenAI account

You can copy the template `config.yaml.example` into a new file `config.yaml`, and paste in the configurations:
```yaml
openaiApiKey: "<your_openai_api_key>"
openaiOrganizationID: "<your_organization_id>"
chatgptTriggerKeyword: "<your_keyword>"
```
Or you can export the environment variables listed in `.env.example` to your system, which is the encouraged method to keep your OpenAI API Key safe:
```bash
export OPENAI_API_KEY="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
export OPENAI_ORGANIZATION_KEY="org-XXXXXXXXXXXXXXX"
export CHATGPT_TRIGGER_KEYWORD="Hi bot:"
```
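As a sketch of how these settings can be consumed at startup, the helper below prefers an environment variable and falls back to a config-file value (the helper name `resolveSetting` is an assumption for illustration, not the project's actual code):

```typescript
// Hypothetical helper: resolve a setting from an environment map first,
// falling back to a default (e.g. a value read from config.yaml).
function resolveSetting(
  env: Record<string, string | undefined>,
  name: string,
  fallback: string
): string {
  const value = env[name];
  // Treat missing and empty environment variables the same way.
  return value !== undefined && value !== "" ? value : fallback;
}

// e.g. with Node's environment:
// const keyword = resolveSetting(process.env, "CHATGPT_TRIGGER_KEYWORD", "Hi bot:");
```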
Please note:

- `chatgptTriggerKeyword` is the keyword that triggers auto-reply: a group message of the form `@Name <keyword>` will trigger auto-reply
- `chatgptTriggerKeyword` can be an empty string, in which case every incoming message will trigger auto-reply
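The trigger rule described above can be sketched as follows (a hypothetical helper, not the project's actual implementation):

```typescript
// Hypothetical sketch of the trigger rule: a message starting with the
// keyword, or a group message of the form "@Name <keyword>", triggers
// auto-reply; an empty keyword means every message triggers it.
function shouldTrigger(text: string, botName: string, keyword: string): boolean {
  if (keyword === "") return true; // empty keyword: always trigger
  const mention = `@${botName} `;
  return text.startsWith(keyword) || text.startsWith(mention + keyword);
}
```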
You can build and run with Docker:

```bash
docker build -t chatgpt-on-wechat .
docker run -v $(pwd)/config.yaml:/app/config.yaml chatgpt-on-wechat
```
You can also build with Docker Compose:

```bash
docker-compose up -d
docker-compose logs -f
```
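The Compose commands assume a `docker-compose.yml` along these lines (a minimal sketch; the service name, restart policy, and mount path are assumptions mirroring the `docker run` command above):

```yaml
version: "3"
services:
  chatgpt-on-wechat:
    build: .
    volumes:
      - ./config.yaml:/app/config.yaml
    restart: unless-stopped
```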
Once you deploy the bot successfully, just follow the prompts in the terminal (or in the Docker container's Logs) carefully:
🤖 Enjoy your powerful chatbot! 🤖
Click the button below to fork this repo and deploy with Railway!
Railway
Fill in the following blanks:
Please note:
Make sure the environment variables are set in Railway instead of written directly in `config.yaml`. It's really NOT recommended to write your OpenAI API Key out in a public repo. Anyone with your key can access the OpenAI API services, and you may lose money if you pay for that usage.
The deployment process is automatic and may take a few minutes the first time. Once you see Success, click the tab to see the details (which is your secret WeChat console!).
Click Deploy Logs and you will see everything being set up; wait for a QR code to pop up. Scan it as if you were logging in to your desktop WeChat, and tap "Log in" on your mobile WeChat.
Finally, everything is good to go! You will see the logs when people send you messages, and whenever the chatbot is auto-triggered to reply.
One-click deployment on Alibaba Cloud ComputeNest:
Follow the deployment guide to deploy ChatGPT-on-WeChat on Alibaba Cloud. Both the domestic site and the international site are supported.
First, provide the cloud resource configuration, such as the ECS instance type and network configuration, and set the ChatGPT-on-WeChat software configuration.
When you confirm the deployment, Alibaba Cloud ComputeNest creates an ECS instance in your own Alibaba Cloud account, then deploys the ChatGPT-on-WeChat application and starts it on the ECS instance automatically.
After the ComputeNest service instance is deployed, check "How to use" for how to log in to the ECS instance.
Run the command in the ECS workbench to get the QR code.
Scan it as if you were logging in to your desktop WeChat, and tap "Log in" on your mobile WeChat.
Finally, everything is good to go! You will see the logs when people send you messages, and whenever the chatbot is auto-triggered to reply.
When the OpenAI API encounters errors (e.g. over-crowded traffic, no authorization, ...), the chatbot will auto-reply with a pre-configured message.
You can change it in `src/chatgpt.js`:
```js
// Translation: "🤖️: ChatGPT is slacking off, please try again later~"
const chatgptErrorMessage = "🤖️:ChatGPT摆烂了,请稍后再试~";
```
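A sketch of how such a fallback typically wraps the API call (with an English stand-in message; `callOpenAI` is a hypothetical stand-in for the real request, not the project's actual function):

```typescript
// Fallback sketch: if the OpenAI call fails (rate limit, auth error,
// network issue, ...), reply with a pre-configured message instead of
// crashing the bot.
const chatgptErrorMessage = "🤖️: ChatGPT is unavailable, please try again later~";

async function safeReply(callOpenAI: () => Promise<string>): Promise<string> {
  try {
    return await callOpenAI();
  } catch {
    return chatgptErrorMessage; // swallow the error, send the canned reply
  }
}
```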
OpenAI Models
You can switch to whatever OpenAI model you like, to trade off capability, latency, and expense (e.g. a more capable model takes more time to respond). Currently, the latest `gpt-4o` model is up and running!
Since the latest gpt-4
model is currently in a limited beta and only accessible to those who have been granted access, currently we use the gpt-3.5-turbo
model as default. Of course, if you have the access to gpt-4
API, you can just change the model to gpt-4
without any other modification.
According to OpenAI doc,
GPT-4o (“o” for “omni”) is our most advanced model. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient—it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance across non-English languages of any of our models.
GPT-3.5 models can understand and generate natural language or code. Our most capable and cost effective model in the GPT-3.5 family is `gpt-3.5-turbo`, which has been optimized for chat but works well for traditional completions tasks as well.
Also, for the same model, we can configure dozens of parameters (e.g. answer randomness, maximum token limit...). For example, for the `temperature` field:
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
You can configure all of them in `src/chatgpt.js`:
```ts
chatgptModelConfig: object = {
  // this model field is required
  model: "gpt-4o",
  // add your ChatGPT model parameters below
  temperature: 0.8,
  // max_tokens: 2000,
};
```
For more details, please refer to OpenAI Models Doc.
You can change whatever features you like to handle different types of tasks. (e.g. complete text, edit text, generate code...)
Currently, we use `createChatCompletion()` powered by the `gpt-4o` model, which will:
take a series of messages as input, and return a model-generated message as output.
You can configure it in `src/chatgpt.js`:
```ts
const response = await this.openaiApiInstance.createChatCompletion({
  ...this.chatgptModelConfig,
  messages: inputMessages,
});
```
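As a sketch of pulling the reply text out of that response, the nested `choices[0].message.content` shape below follows the OpenAI chat-completion response format; the stub types are illustrative, not the SDK's own:

```typescript
// Illustrative stand-ins for the OpenAI chat-completion response shape.
interface ChatChoice {
  message: { role: string; content: string };
}
interface ChatResponse {
  choices: ChatChoice[];
}

// Extract the assistant's reply, falling back to an empty string if the
// response contains no choices.
function extractReply(response: ChatResponse): string {
  return response.choices[0]?.message.content ?? "";
}
```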
For more details, please refer to OpenAI API Doc.
You can add your own task handlers to expand the ability of this chatbot!
In `src/chatgpt.ts`, inside `ChatGPTBot.onCustimzedTask()`, write your own task handler:
```ts
// e.g. if a message starts with "Hello", the bot sends "World!"
if (message.text().startsWith("Hello")) {
  await message.say("World!");
  return;
}
```
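For instance, here is another hypothetical handler in the same style that replies with the server time; the `BotMessage` stub mimics only the two Wechaty `Message` methods used above (`text()` and `say()`):

```typescript
// Tiny stub of the parts of Wechaty's Message API this handler needs.
interface BotMessage {
  text(): string;
  say(reply: string): Promise<void>;
}

// Hypothetical extra task: if the message is exactly "time?", reply
// with the current server time and report that the task was handled.
async function onTimeTask(message: BotMessage): Promise<boolean> {
  if (message.text().trim() === "time?") {
    await message.say(new Date().toISOString());
    return true; // handled, skip further processing
  }
  return false; // not our task, let other handlers run
}
```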
Error Log:

```
uncaughtException AssertionError [ERR_ASSERTION]: 1 == 0
    at Object.equal (/app/node_modules/wechat4u/src/util/global.js:53:14)
    at /app/node_modules/wechat4u/src/core.js:195:16
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  code: 2,
  details: 'AssertionError [ERR_ASSERTION]: 1 == 0\n' +
    '    at Object.equal (/app/node_modules/wechat4u/src/util/global.js:53:14)\n' +
    '    at /app/node_modules/wechat4u/src/core.js:195:16\n' +
    '    at processTicksAndRejections (node:internal/process/task_queues:96:5)'
}
```
Solution:

- `<keyword>`
- `@Name <keyword>`
You are more than welcome to raise issues, fork this repo, commit your code, and submit a pull request. After code review, we can merge your contribution. I'm really looking forward to developing more interesting features!
Also, there are some items on the to-do list for future enhancement:
- `LangChain` integration
- `DALL·E` model for AI image creation, triggered by a customized keyword (e.g. "Hi bot, draw ...")
- `Whisper` model for speech recognition, triggered by voice messages, for transcription or translation

Great thanks to: