This repository hosts the backend code for a chatbot integrated with a website built using Notion. The backend is built with the Flask framework and hosted on Render. It handles API calls to OpenAI's GPT-4o mini model and returns the chatbot's responses, which are displayed in a chatbox embedded on the frontend.
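At its core, the backend exposes a single Flask route that forwards the user's message to OpenAI and returns the reply as JSON. The sketch below is illustrative rather than a copy of the actual api/backend.py: the /chat route name, the request and response payload shapes, and the use of flask-cors for cross-origin requests from the embedded chatbox are all assumptions.

# api/backend.py (illustrative sketch; route name and payload shape are assumptions)
import os
from flask import Flask, request, jsonify
from flask_cors import CORS
from openai import OpenAI

app = Flask(__name__)
CORS(app)  # the chatbox is served from another origin (e.g. github.io), so allow cross-origin requests
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@app.route("/chat", methods=["POST"])
def chat():
    # Expect a JSON body such as {"message": "Hello"} from the embedded chatbox
    user_message = request.get_json(force=True).get("message", "")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    # Return the model's reply so the frontend can render it in the chatbox
    return jsonify({"reply": response.choices[0].message.content})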
To get started, clone this repository to your local machine:
git clone https://github.com/your_username/chatbot-backend.git
cd chatbot-backend
Before running the application, ensure all necessary Python dependencies are installed. You can install them using the following command:
pip install -r requirements.txt
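The authoritative dependency list is the requirements.txt in this repository; based on the stack described above it will contain at least Flask, Gunicorn, and the OpenAI SDK (flask-cors is an additional assumption, needed only if the chatbox is served from a different origin). Roughly:

flask
gunicorn
openai
flask-cors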
The backend reads your OpenAI API key from the OPENAI_API_KEY environment variable, so set it before starting the server:
export OPENAI_API_KEY="your_openai_api_key"
To test the backend locally, run the Flask app with Gunicorn. The server listens for incoming requests and forwards them to the OpenAI API. Start it with:
gunicorn --bind 0.0.0.0:10000 api.backend:app
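Once the server is running, you can exercise it with a quick request from another terminal. The /chat path and JSON payload below are assumptions; adjust them to match the route actually defined in api.backend:
curl -X POST http://localhost:10000/chat -H "Content-Type: application/json" -d '{"message": "Hello"}'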
To deploy this backend to Render, follow these steps (an equivalent render.yaml blueprint is sketched after the list):
1. Go to Render and create an account if you haven’t already.
2. Create a new web service and link it to this GitHub repository.
3. Set the build command to:
pip install -r requirements.txt
4. Set the start command to:
gunicorn --bind 0.0.0.0:10000 api.backend:app
5. Add the OPENAI_API_KEY environment variable in the Environment section.
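The same settings can optionally be captured in a render.yaml blueprint at the repository root so the service configuration lives in version control. This file is not part of the repository as described, so treat it as a sketch and verify the field names against Render's blueprint documentation:

services:
  - type: web
    name: chatbot-backend
    env: python
    buildCommand: pip install -r requirements.txt
    startCommand: gunicorn --bind 0.0.0.0:10000 api.backend:app
    envVars:
      - key: OPENAI_API_KEY
        sync: false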
The frontend of this project is embedded in a Notion page. Since Notion doesn’t support embedding raw HTML or JavaScript directly, the chatbox (built with HTML, CSS, and JavaScript) is hosted externally and embedded into the page via a link.
Follow these steps to integrate your frontend:
1. Create an HTML file that includes the chatbox design, some CSS for styling, and JavaScript to handle requests to the backend.
2. Push the file to your GitHub Pages (github.io) repository so it is served as a public page.
3. Embed the resulting github.io URL in the Notion page.
Check out my website repository for the frontend code: AI-assistant-frontend