kaiohenricunha / aws-scalable-metabase-deployment

This project focuses on setting up a resilient and scalable Business Intelligence (BI) environment using Metabase on AWS EKS with Fargate, Karpenter, and Keda. It is designed as a case study.

Scaling Workloads with the Big Savings Quartet: EKS, Fargate, Karpenter, and Keda

Introduction

This project deploys Metabase, an open-source business intelligence tool, on EKS with Fargate, Karpenter, and Keda to achieve efficient scaling and cost savings. The setup also uses Terraform for infrastructure management, Istio as the service mesh, and Prometheus and Grafana for monitoring.

Tools Used

Project Structure

.
├── README.md
├── annotations.md
├── assets
│   ├── ekfk-arch.drawio.png
│   ├── keda-dashboard.png
│   └── stack-workflow.png
├── environments
│   ├── dev
│   └── lab
│       ├── backend.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── providers.tf
│       ├── s3-dynamodb
│       │   └── main.tf
│       └── variables.tf
├── infra
│   ├── backend
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── eks-fargate-karpenter
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── rds
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── scripts
│   └── calculate_cluster_size.sh
└── stack
    ├── istio
    │   ├── istiod-values.yaml
    │   ├── pod-monitor.yaml
    │   └── service-monitor.yaml
    ├── keda
    │   └── values.yaml
    ├── metabase
    │   ├── metabase-hpa.yaml
    │   ├── metabase-scaling-dashboard.yaml
    │   └── values.yaml
    └── monitoring
        └── values.yaml
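
The tree above includes scripts/calculate_cluster_size.sh, whose contents are not reproduced here. As a rough, hypothetical sketch of what a sizing helper like it might compute, the core is typically a ceiling division of requested capacity over per-unit capacity (the function name and numbers below are illustrative, not taken from the script):

```shell
# Hypothetical sizing sketch: given the total CPU requested by workloads
# (in millicores) and the capacity of one Fargate pod profile, compute the
# number of pods needed, rounding up (ceiling division).
pods_needed() {
  local requested_mcpu=$1 per_pod_mcpu=$2
  echo $(( (requested_mcpu + per_pod_mcpu - 1) / per_pod_mcpu ))
}

pods_needed 2500 1000   # 2500m requested, 1000m per pod -> 3 pods
```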

Step 1: Local Environment Setup

  1. Clone the devenv repository, make its scripts executable, and run the setup script:

    chmod +x *.sh
    ./main.sh
  2. Alternatively, install prerequisites manually:

    • Terraform CLI
    • AWS CLI
    • kubectl
    • kubectx

Step 2: AWS Credentials and Terraform Backend

  1. Run aws configure to set up AWS credentials.
  2. Store AWS credentials in GitHub repository secrets.
  3. Navigate to environments/lab/s3-dynamodb and initialize Terraform:

    terraform init
    terraform apply
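
For reference, the S3 bucket and DynamoDB table created here are what environments/lab/backend.tf points Terraform at for remote state and locking. A backend block typically looks like the following (bucket and table names are placeholders, not this project's actual values):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder
    key            = "lab/terraform.tfstate"     # placeholder
    region         = "us-east-1"
    dynamodb_table = "my-terraform-locks"        # placeholder, enables state locking
    encrypt        = true
  }
}
```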

Step 3: AWS Infrastructure

Run the GitHub Actions workflows to provision the VPC, EKS cluster, and other infrastructure components: plan-workflow.yaml to review the plan, then apply-workflow.yaml to apply it.
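
If you prefer the terminal over the Actions tab, the workflows can also be dispatched with the GitHub CLI. This is a sketch, assuming `gh` is installed and the workflows accept manual dispatch; the `run` wrapper only echoes commands until you set DRY_RUN=false:

```shell
# Sketch: dispatch the plan/apply workflows via the GitHub CLI (`gh`).
# DRY_RUN defaults to true, so the commands are echoed, not executed.
DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = true ]; then echo "would run: $*"; else "$@"; fi; }

run gh workflow run plan-workflow.yaml    # review the plan output first
run gh workflow run apply-workflow.yaml   # then apply the infrastructure
```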

Step 4: Metabase Deployment + cluster stack

Run the stack-workflow.yaml workflow.
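
The stack workflow installs KEDA from stack/keda/values.yaml and the Metabase scaling objects from stack/metabase. The repository's actual scaler definition is not shown here, but a KEDA ScaledObject driving Metabase replicas off a Prometheus metric would look roughly like this (all names, the query, and the threshold are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: metabase-scaler        # illustrative name
  namespace: metabase          # illustrative namespace
spec:
  scaleTargetRef:
    name: metabase             # the Metabase Deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.monitoring.svc:9090  # illustrative
        query: sum(rate(istio_requests_total{destination_app="metabase"}[2m]))
        threshold: "50"
```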

Step 5: Accessing Services

  1. Get service IP addresses:

    kubectl get svc -A
  2. Access services using their external IPs:

    • Metabase: xxxxx.elb.us-east-1.amazonaws.com
    • Grafana: xxxxx.elb.us-east-1.amazonaws.com
    • Prometheus: xxxxx.elb.us-east-1.amazonaws.com:9090/graph
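
The external hostnames can be pulled out of the `kubectl get svc -A` listing with a small filter. The sketch below inlines a sample of the command's output so it runs without a cluster; the service names are illustrative:

```shell
# Sketch: keep only LoadBalancer services and print "name: external-host".
# Columns of `kubectl get svc -A`:
# NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
extract_lb_hosts() {
  awk '$3 == "LoadBalancer" {print $2 ": " $5}'
}

extract_lb_hosts <<'EOF'
metabase     metabase   LoadBalancer  172.20.1.10  xxxxx.elb.us-east-1.amazonaws.com  80:31000/TCP  5m
monitoring   grafana    LoadBalancer  172.20.1.11  xxxxx.elb.us-east-1.amazonaws.com  80:31001/TCP  5m
kube-system  kube-dns   ClusterIP     172.20.0.10  <none>                             53/UDP        5m
EOF
```

Against a live cluster, pipe the real listing through it: `kubectl get svc -A | extract_lb_hosts` (the header line is filtered out along with non-LoadBalancer services).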

Step 6: Destroy

Tear down the environment in the reverse order of creation: destroy the cluster stack and AWS infrastructure first, and the S3/DynamoDB state backend last (for example, with terraform destroy in the corresponding directories).

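A hypothetical teardown sequence, assuming the directory layout above; the `run` wrapper defaults to a dry run so nothing is destroyed when the sketch is executed as-is:

```shell
# Sketch of the teardown order, reversing creation: environment resources
# first, the state backend last. DRY_RUN defaults to true, so the commands
# are only echoed until you set DRY_RUN=false.
DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = true ]; then echo "would run: $*"; else "$@"; fi; }

run terraform -chdir=environments/lab destroy
run terraform -chdir=environments/lab/s3-dynamodb destroy
```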
For detailed instructions and code, refer to the respective files and directories in the repository. For a deeper dive, see the accompanying Medium article.