Introducing Stepupsage.ai - Your AI-driven Cloud Deployment Companion! Leveraging the power of AI, Deploy.ai transforms the way you estimate costs, plan, and migrate applications to the AWS Cloud. The project is split across multiple repositories: this is the main repository, and three other repositories are subsets of it.
AI-Powered Interactions: Integrated with the OpenAI API, Deploy.ai converses with users to gather specific application requirements.
Cost Estimation: Connects with the AWS Cost Calculator API to furnish detailed monthly or yearly cost estimates for the desired AWS services.
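AWS does not ship an SDK client named "Cost Calculator"; one common way to obtain on-demand prices programmatically is the AWS Price List (Pricing) API via Boto3. The sketch below is an assumption about how such an estimate could be produced, not the repository's actual code: the cost arithmetic is separated out so it works offline, and the filter values in the lookup are illustrative.

```python
import json

def monthly_and_yearly_cost(hourly_usd: float, hours_per_month: float = 730.0):
    """Convert an hourly on-demand price into rounded monthly/yearly estimates."""
    monthly = hourly_usd * hours_per_month
    return round(monthly, 2), round(monthly * 12, 2)

def fetch_ec2_hourly_price(instance_type: str, region: str = "US East (N. Virginia)") -> float:
    """Look up an on-demand Linux price via the Price List API (needs AWS credentials)."""
    import boto3  # imported lazily so the cost math above works without boto3 installed
    pricing = boto3.client("pricing", region_name="us-east-1")  # Pricing API endpoint region
    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": region},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    # Each PriceList entry is a JSON document; drill into the on-demand price dimension.
    product = json.loads(resp["PriceList"][0])
    on_demand = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(on_demand["priceDimensions"].values()))
    return float(dimension["pricePerUnit"]["USD"])
```

For example, a t3.micro at roughly $0.0104/hour works out to about $7.59/month.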
Deployment Planning: Generates a JSON-based deployment plan (file.json), tailored to user-provided specifications.
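The schema of file.json is not spelled out here; as a purely hypothetical illustration, a minimal plan written from Python might look like the following (all field names are assumptions, not the repository's actual schema):

```python
import json

# Hypothetical shape of file.json -- the real schema is defined by this project.
deployment_plan = {
    "app_name": "sample-app",
    "deployment_type": "web",  # the case json_read.py would branch on
    "region": "us-east-1",
    "resources": {
        "ec2": {"instance_type": "t3.micro", "count": 2},
        "load_balancer": {"type": "application"},
        "security_groups": [{"name": "web-sg", "ingress_ports": [80, 443]}],
    },
}

with open("file.json", "w") as f:
    json.dump(deployment_plan, f, indent=2)
```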
AWS Architecture Visualization: Automatically creates and shares a comprehensive AWS architecture diagram, visualizing the planned services.
Terraform Automation: Utilizes file.json to provision infrastructure on AWS, tapping into powerful Terraform modules for infrastructure as code.
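One way to drive Terraform from file.json is to translate the plan into a CLI invocation; the sketch below only builds the command (the -var names like instance_count are hypothetical, and the repository's actual Terraform modules may expect different variables):

```python
def terraform_apply_command(plan: dict) -> list:
    """Build a `terraform apply` invocation from a deployment plan dict."""
    cmd = ["terraform", "apply", "-auto-approve"]
    cmd += ["-var", f"region={plan['region']}"]
    ec2 = plan["resources"]["ec2"]
    cmd += ["-var", f"instance_type={ec2['instance_type']}"]
    cmd += ["-var", f"instance_count={ec2['count']}"]
    return cmd

sample_plan = {
    "region": "us-east-1",
    "resources": {"ec2": {"instance_type": "t3.micro", "count": 2}},
}
cmd = terraform_apply_command(sample_plan)
# subprocess.run(cmd, check=True, cwd="terraform/")  # would actually provision
```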
Ansible Configuration: Employs Ansible for meticulous environment setup and application deployment, ensuring consistency across AWS services.
Health Monitoring: Post-deployment, Deploy.ai conducts health checks and establishes a monitoring dashboard using Grafana for real-time application insights.
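A minimal post-deployment health probe might look like the sketch below; the /health endpoint is an assumption, and the Grafana dashboard setup is a separate step not shown here:

```python
from urllib import request, error

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the application's health endpoint answers with a 2xx status."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (error.URLError, OSError):
        # Connection refused, DNS failure, or timeout all count as unhealthy.
        return False
```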
Progress Tracking: Offers the ability to save progress and issues Jira tickets to track deployment stages.
Initial Setup: Users land on the homepage and register via AWS Cognito.
Engagement: The AI ChatBot engages users, querying about application specifics and desired AWS resources.
Planning & Costing: A JSON deployment plan is generated, and a cost estimation is provided, alongside an AWS architecture diagram.
User Decision Point: Users decide whether to proceed with the deployment.
Yes: Users provide AWS credentials, and Deploy.ai begins infrastructure provisioning with Terraform, followed by Ansible for deployment.
No: Progress is saved, and the ChatBot offers to restart the conversation or save the session for later.
Deployment & Monitoring: Once deployed, application health is checked, and a monitoring dashboard is set up.
Completion Notification: A comprehensive notification is sent to the user via Slack and email, detailing the deployment and next steps.
Multi-AZ Deployment: Ensures high availability by spanning across multiple Availability Zones.
Secure Authentication: Leverages AWS Cognito for secure user sign-in.
CI/CD Integration: Integrates with GitHub Actions and Docker Hub for continuous integration and deployment.
Centralized Monitoring: Utilizes CloudWatch and SNS for detailed monitoring and alerts.
We are using an EC2 instance with an Elastic IP attached to it. GitHub Actions is used to update the local repository. Nginx is set up with a domain name, with Cloudflare managing DNS. For SSL we are using Certbot.
We are using the AWS Well-Architected Framework to build this project, which ensures operational excellence, performance efficiency, reliability, sustainability, security, and cost optimization.
Frontend of our Application:
ChatBot Interaction: Users interact with our ChatBot to effortlessly provide essential details related to their source code/application. This interaction streamlines the deployment process, ensuring seamless deployment on AWS with just one click.
Backend of the Application:
Jenkins Pipeline1 - Infrastructure Creation:
This pipeline triggers a Terraform script tailored to create AWS infrastructure aligned with the requirements of the source code. It encompasses provisioning resources like EC2 instances, security groups, Load Balancers, and other essential components.
Jenkins Pipeline2:
Stage 1: Executes a Python Boto3 script to fetch EC2 IP addresses and Ansible user information. This data populates a hosts.ini file crucial for Ansible playbook execution.
Stage 2: Ensures secure storage of requisite files (.pem and .yml), such as SSH private keys and Ansible playbooks, on the Ansible server.
Stage 3: Executes the Ansible playbook to configure EC2 instances with the source code and requisite software dependencies.
Manual Configuration in Cloudflare: Involves manually configuring DNS addresses for the Load Balancer in Cloudflare to effectively manage DNS and incoming traffic routed to our AWS infrastructure. This step ensures seamless and optimized traffic flow to our application.
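Stage 1's Boto3 step could be sketched as follows; the inventory group name [web] and the split between fetching and rendering are assumptions, not the pipeline's actual script:

```python
def render_hosts_ini(hosts, ansible_user: str) -> str:
    """Render an Ansible INI inventory from (instance_id, ip) pairs."""
    lines = ["[web]"]
    for name, ip in hosts:
        lines.append(f"{ip} ansible_user={ansible_user}  # {name}")
    return "\n".join(lines) + "\n"

def fetch_running_instance_ips(region: str = "us-east-1"):
    """Fetch public IPs of running EC2 instances via Boto3 (needs AWS credentials)."""
    import boto3  # imported lazily so render_hosts_ini works without boto3 installed
    ec2 = boto3.client("ec2", region_name=region)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    ips = []
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                if "PublicIpAddress" in inst:
                    ips.append((inst["InstanceId"], inst["PublicIpAddress"]))
    return ips

# In the pipeline, the rendered string would be written to hosts.ini:
# open("hosts.ini", "w").write(render_hosts_ini(fetch_running_instance_ips(), "ubuntu"))
```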
Things To Do in Future scope:
Future Plans: Currently, in the json_read.py file, we create infrastructure and configuration based on conditions written for each JSON deployment case.
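The condition-per-case approach described above might be sketched like this (the deployment cases and handler names are hypothetical, not the actual contents of json_read.py):

```python
# Hypothetical sketch of json_read.py's condition-per-case dispatch.

def handle_web(plan):
    # e.g. EC2 instances behind a load balancer
    return f"provision web stack in {plan['region']}"

def handle_static_site(plan):
    # e.g. S3 bucket fronted by CloudFront
    return f"provision static site in {plan['region']}"

HANDLERS = {"web": handle_web, "static_site": handle_static_site}

def deploy_from_plan(plan: dict) -> str:
    """Pick the handler matching the plan's deployment case."""
    case = plan["deployment_type"]
    if case not in HANDLERS:
        raise ValueError(f"unsupported deployment case: {case}")
    return HANDLERS[case](plan)
```

Every new deployment case means another handler and another branch, which is the manual-coding burden the planned AI model is meant to remove.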
Later, we would like to create an AI model that automatically figures out the corresponding code to run by reading the Terraform scripts.
Or, even better, we would build the AI model by training it on various TF scripts so that it can create the Terraform scripts automatically.
With the input provided by the user, this AI model will detect the type of deployment.
By analyzing the JSON file, this AI model would be able to identify patterns and dependencies within the infrastructure and configuration requirements. This would eliminate the need for manual coding and reduce human errors, resulting in a more efficient and accurate deployment process. Additionally, the AI model could continuously learn and adapt to new deployment scenarios, enhancing its capabilities over time.
The AI model would then translate these natural language inputs into the appropriate Terraform scripts, further simplifying the deployment process for users.
Future vision: We plan to develop an AI model that can automatically determine the corresponding code to run by reading Terraform scripts, eliminating the need for manual intervention.
Improved efficiency: Building the AI model by training it on various TF scripts would lead to greater automation and efficiency, as it would be able to generate Terraform scripts automatically.
User interaction: The proposed AI model could incorporate user input to accurately detect different types of deployments, making it more versatile and adaptable for various scenarios.
Potential benefits: Such an AI-driven system promises reduced human error, faster script generation, scalability, and better resource allocation based on specific deployment requirements.