Cyberworld-builders / Rapid-spa

Experimenting with rapid deployments of web apps

Scope the rapid deployment of a static site from end to end #1

Open jaylong255 opened 2 months ago

jaylong255 commented 2 months ago

Here's a step-by-step guide with Terraform code to deploy a simple Single Page Application (SPA) to Google Cloud Storage with Cloud CDN enabled:

# Configure the Google Cloud provider
provider "google" {
  project = "your-project-id"
  region  = "us-central1"
}

# Create a storage bucket for the SPA
resource "google_storage_bucket" "spa_bucket" {
  name          = "your-unique-bucket-name-for-spa"
  location      = "US"
  force_destroy = true

  website {
    main_page_suffix = "index.html"
    not_found_page   = "404.html" # for client-side routing, index.html is a common choice here
  }

  # Manage access with IAM only (public access is granted by the IAM binding below)
  uniform_bucket_level_access = true
}

# Make all objects in the bucket publicly readable
resource "google_storage_bucket_iam_member" "member" {
  bucket = google_storage_bucket.spa_bucket.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}

# Upload the built SPA files to the bucket - adjust the path and the fileset
# pattern to match your build output
resource "google_storage_bucket_object" "static_files" {
  for_each = fileset("${path.module}/path-to-your-spa/", "**")
  bucket   = google_storage_bucket.spa_bucket.name
  name     = each.value
  source   = "${path.module}/path-to-your-spa/${each.value}"
}
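
# (Suggested addition, not part of the original snippet.) Without an explicit
# content_type, objects default to application/octet-stream, which can keep the
# browser from loading your JS and CSS correctly. A minimal sketch of a MIME
# map keyed on file extension - extend it to cover your build output:
locals {
  spa_mime_types = {
    "html" = "text/html"
    "js"   = "application/javascript"
    "css"  = "text/css"
    "json" = "application/json"
    "svg"  = "image/svg+xml"
    "png"  = "image/png"
  }
}
# Then set this attribute inside google_storage_bucket_object.static_files:
#   content_type = lookup(local.spa_mime_types, regex("[^.]*$", each.value), null)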

# Enable Cloud CDN for the bucket
resource "google_compute_backend_bucket" "spa_backend" {
  name        = "spa-backend-bucket"
  description = "Contains the SPA files"
  bucket_name = google_storage_bucket.spa_bucket.name
  enable_cdn  = true
}

# Create a global IP address for the CDN
resource "google_compute_global_address" "default" {
  name = "cdn-static-ip"
}

# Create an HTTP URL map
resource "google_compute_url_map" "urlmap" {
  name            = "spa-url-map"
  default_service = google_compute_backend_bucket.spa_backend.self_link
}

# Create an HTTP proxy
resource "google_compute_target_http_proxy" "default" {
  name    = "http-lb-proxy"
  url_map = google_compute_url_map.urlmap.id
}

# Forwarding rule for External Network Load Balancing using the global IP
resource "google_compute_global_forwarding_rule" "default" {
  name       = "http-content-rule"
  target     = google_compute_target_http_proxy.default.self_link
  ip_address = google_compute_global_address.default.address
  port_range = "80"
}

# Output the CDN URL
output "cdn_url" {
  value = "http://${google_compute_global_address.default.address}"
}

Notes:

  1. Project ID: Replace "your-project-id" with your actual Google Cloud project ID.

  2. Bucket Name: Ensure "your-unique-bucket-name-for-spa" is globally unique across all of Google Cloud Storage.

  3. SPA Path: Replace "path-to-your-spa" with the path to your built SPA files. The fileset pattern "**" uploads every file under that directory, including subdirectories; narrow the pattern or use multiple google_storage_bucket_object resources if you only want part of the build output.

  4. Security: This setup makes the bucket contents publicly readable, which is necessary for a public SPA; just be careful not to place sensitive data in this bucket.

  5. Terraform Version: Ensure you're using a version of Terraform that supports these features; for_each on resources (used in google_storage_bucket_object) requires Terraform 0.12.6 or later.

  6. State Management: Remember to manage your Terraform state appropriately, especially in a production environment; a remote backend is one option (see the sketch after this list).

  7. Error Handling: In a real-world scenario, add more error handling and possibly use modules for better organization.
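
For note 6, one common option (a sketch, assuming you have already created a separate bucket to hold state) is the gcs backend:

terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket" # must already exist; keep it separate from the SPA bucket
    prefix = "rapid-spa"
  }
}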

This Terraform script sets up everything you need for hosting an SPA with CDN capabilities on Google Cloud. After running this, you'll get an output with the CDN URL where your SPA is accessible.
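
For reference, the usual workflow from the directory holding this configuration (assuming your Google credentials are already available, for example via gcloud auth application-default login):

terraform init
terraform plan
terraform apply
terraform output cdn_url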

jaylong255 commented 2 months ago

To package Terraform into a Docker container, you'll need to create a Dockerfile that installs Terraform, sets up any necessary environment, and then runs your Terraform commands. Here's how you can structure your Dockerfile:

# Use an official HashiCorp Terraform image as a base
FROM hashicorp/terraform:1.0.0

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install the Google Cloud SDK (optional - the google provider itself only
# needs credentials via GOOGLE_APPLICATION_CREDENTIALS). The base image is
# Alpine-based, so apt-get is unavailable; use apk plus the SDK install script.
RUN apk add --no-cache bash curl python3 && \
    curl -sSL https://sdk.cloud.google.com | bash -s -- --disable-prompts --install-dir=/usr/local
ENV PATH="/usr/local/google-cloud-sdk/bin:${PATH}"

# Initialize Terraform at build time - skip this if you prefer to run init when
# the container starts (as the entrypoint below does)
# RUN terraform init

# The base image's entrypoint is the terraform binary itself, so override it
# with a shell command that runs init and then apply when the container starts
ENTRYPOINT ["sh", "-c", "terraform init -input=false && terraform apply -auto-approve"]

The Dockerfile copies your Terraform configuration into /app, optionally installs the Google Cloud SDK, and runs terraform init and terraform apply when the container starts.

Steps to Use:

  1. Create a Dockerfile: Place this Dockerfile in the same directory as your Terraform files.

  2. Build the Docker Image:

    docker build -t my-terraform-gcp-app .
  3. Run the Docker Container:

    docker run -e GOOGLE_CLOUD_PROJECT="your-project-id" \
    -e GOOGLE_APPLICATION_CREDENTIALS="/app/your-service-account-key.json" \
    my-terraform-gcp-app
    • Replace "your-project-id" with your Google Cloud project ID.
    • Ensure your service account key JSON file is copied into the directory and named appropriately, or adjust the path in the environment variable.

    Note: You'll need to manage authentication. The example above assumes the service account key was copied into the image, which is convenient but not ideal for production; prefer mounting the key as a read-only volume or injecting credentials with Docker secrets (see the sketch after this list).

  4. Authentication: Make sure your Google Cloud authentication is handled correctly. You might need to set up authentication within the container or pass credentials securely.
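
As a sketch of the volume-mount approach mentioned in the note under step 3 (the key path and filename here are placeholders, not files that exist in this repo):

docker run --rm \
  -v "$HOME/keys/terraform-sa.json:/secrets/sa.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS="/secrets/sa.json" \
  -e GOOGLE_CLOUD_PROJECT="your-project-id" \
  my-terraform-gcp-app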

This setup encapsulates your Terraform environment within Docker, so you don't need Terraform installed on the host machine, and it keeps runs consistent across environments. Just remember to handle sensitive information like credentials with care in production.