dayton-outar / shillings

An attempt at an immersive stock-trading platform

Build Helm Chart #73

Open dayton-outar opened 1 month ago

dayton-outar commented 1 month ago

Currently, docker compose up -d is used to bring up the entire distribution of applications, but the script that restores the database still needs to be integrated. Furthermore, instead of docker-compose, Helm should be used to deploy the apps based on the Kubernetes manifests.

dayton-outar commented 3 weeks ago

To modify your Dockerfile for restoring a backup file to an MSSQL database, you can follow these steps:

  1. Add the Backup File: Place your backup file (stocks.bak) in a directory that will be copied to the container.
  2. Write a Restore Script: Create a shell script (e.g., restore-db.sh) to handle the restoration.
  3. Execute the Restore Script: Update the Dockerfile to run this script after starting the MSSQL server.

Here's how you could set it up:

  1. Directory Structure:

    • scripts/restore-db.sh: Shell script for the restoration process.
    • backups/stocks.bak: Backup file to restore.
  2. Dockerfile Changes:

    FROM mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04
    
    USER root
    
    # Create necessary directories
    RUN mkdir -p /app /var/opt/mssql/bkup
    
    WORKDIR /app
    
    # Copy scripts and backup file to container
    COPY ./scripts /app
    COPY ./backups/stocks.bak /var/opt/mssql/bkup
    
    # Grant permissions for the restore script
    RUN chmod +x /app/restore-db.sh
    
    # Expose port 1433
    EXPOSE 1433
    
    # Run the restore script in the background and keep SQL Server in the foreground,
    # so the container keeps running after the restore completes
    CMD /bin/bash -c "/app/restore-db.sh & /opt/mssql/bin/sqlservr"
  3. restore-db.sh Script:

    #!/bin/bash
    
    # Wait for SQL Server to start up
    until /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" -Q "SELECT 1" &> /dev/null
    do
       echo "Waiting for SQL Server to start..."
       sleep 5
    done
    
    # Restore the database
    echo "Restoring database from backup..."
    /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" -Q "RESTORE DATABASE [stocks] FROM DISK = N'/var/opt/mssql/bkup/stocks.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 5"
    echo "Database restore complete."

Explanation

The Dockerfile copies the restore script and the stocks.bak backup into the image and marks the script executable. At container start, restore-db.sh runs in the background while sqlservr stays in the foreground as the container's main process. The script polls sqlcmd until the server accepts connections, then issues a single RESTORE DATABASE against the copied backup, so the stocks database comes up restored without any manual step.
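
Assuming the Dockerfile, scripts/ and backups/ all sit in the build context root, the image could be built with something like this (the tag mssql-stocks is only an illustrative name):

docker build -t mssql-stocks .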

dayton-outar commented 3 weeks ago

To run the Docker container and expose SQL Server on localhost so you can connect with a SQL client, use the following docker run command. This will also set the required environment variables for MSSQL:

docker run -d \
  --name mssql-container \
  -e 'ACCEPT_EULA=Y' \
  -e 'MSSQL_SA_PASSWORD=YourStrong!Passw0rd' \
  -p 1433:1433 \
  your-image-name

Explanation of the Command

  • -d runs the container detached (in the background).
  • --name mssql-container gives the container a fixed name.
  • -e 'ACCEPT_EULA=Y' accepts the SQL Server license agreement, which the image requires.
  • -e 'MSSQL_SA_PASSWORD=...' sets the SA password used by both the server and restore-db.sh.
  • -p 1433:1433 publishes SQL Server's port on localhost.
  • your-image-name is the image built from the Dockerfile above.

Connecting from a SQL Client

With this setup, you can connect to your SQL Server instance from any SQL client using localhost,1433 as the server, SA as the login, and the value of MSSQL_SA_PASSWORD as the password.
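
For example, if the sqlcmd tool is installed on the host (values matching the docker run command above):

sqlcmd -S localhost,1433 -U SA -P 'YourStrong!Passw0rd' -Q "SELECT name FROM sys.databases"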

dayton-outar commented 2 weeks ago

Couple of things ...

  1. The docker build duplicates the SA password across two environment variables: MSSQL_SA_PASSWORD and SA_PASSWORD. Can I replace MSSQL_SA_PASSWORD in the restore-db.sh file and the Kubernetes manifest with SA_PASSWORD?
  2. How can I restore the database automatically, without having to wait for the db container to come up and then run restore-db.sh manually?
dayton-outar commented 2 weeks ago

To convert these Kubernetes manifests into a Helm chart, I'll create a folder structure and base files. Here’s a structure that typically works well:

Helm Chart Structure

my-application-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── db.yaml
    ├── harpoon.yaml
    └── reader.yaml

Step-by-Step

  1. Chart.yaml
    This file describes the Helm chart. Here's an example:

    apiVersion: v2
    name: my-application-chart
    description: A Helm chart for deploying db, harpoon, and reader services
    version: 0.1.0
  2. values.yaml
    This is where you specify the configurable values. Populate it with any fields you may want to override.

    db:
      replicas: 1
      image: my-db-image:latest
      ...
    harpoon:
      replicas: 1
      image: my-harpoon-image:latest
      ...
    reader:
      replicas: 1
      image: my-reader-image:latest
      ...
  3. templates/db.yaml
    Convert db.yaml into a Helm template. Use the {{ .Values.db.<parameter> }} syntax to replace hard-coded values with values from values.yaml.

    apiVersion: v1
    kind: Pod
    metadata:
      name: {{ .Release.Name }}-db
    spec:
      containers:
        - name: db
          image: {{ .Values.db.image }}
          ports:
            - containerPort: 1433
  4. templates/harpoon.yaml
    Similarly, convert harpoon.yaml into a template.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-harpoon
    spec:
      replicas: {{ .Values.harpoon.replicas }}
      selector:
        matchLabels:
          app: {{ .Release.Name }}-harpoon
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}-harpoon
        spec:
          containers:
            - name: harpoon
              image: {{ .Values.harpoon.image }}
  5. templates/reader.yaml
    Follow the same approach for reader.yaml.

    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Release.Name }}-reader
    spec:
      ports:
        - port: 80
      selector:
        app: {{ .Release.Name }}-reader
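
Once these files are in place, the chart can be linted, previewed, and installed with standard Helm commands (the release name shillings-stack is only an example):

helm lint ./my-application-chart
helm template shillings-stack ./my-application-chart   # preview the rendered manifests
helm install shillings-stack ./my-application-chart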

Let me know if you'd like me to continue with specific values or any particular changes!

dayton-outar commented 2 weeks ago

To integrate this shillings manifest into the existing Helm chart, we’ll add it as another template file in the Helm chart structure. This will involve creating a new shillings.yaml file under the templates/ directory and updating values.yaml to store the configurable properties of this new deployment.

Updated Helm Chart Structure

my-application-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── db.yaml
    ├── harpoon.yaml
    ├── reader.yaml
    └── shillings.yaml  # New file for the shillings deployment

values.yaml

In values.yaml, add a new section for shillings:

shillings:
  replicas: 3
  image: krisyslimited/shillings:1.8
  containerPort: 80
  servicePort: 80
  imagePullSecret: docker-hub-cred

templates/shillings.yaml

Create templates/shillings.yaml, and replace the hard-coded values with template variables that reference values.yaml. Here’s how it will look:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-shillings-deployment
  labels:
    app: {{ .Release.Name }}-shillings
spec:
  replicas: {{ .Values.shillings.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-shillings
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-shillings
    spec:
      containers:
        - name: shillings
          image: {{ .Values.shillings.image }}
          ports:
            - containerPort: {{ .Values.shillings.containerPort }}
      imagePullSecrets:
        - name: {{ .Values.shillings.imagePullSecret }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-shillings-service
spec:
  selector:
    app: {{ .Release.Name }}-shillings
  ports:
    - name: tcpweb
      protocol: TCP
      port: {{ .Values.shillings.servicePort }}
      targetPort: {{ .Values.shillings.containerPort }}

Explanation

This approach allows you to configure the shillings deployment settings through values.yaml, making the chart more flexible and reusable across different environments. Let me know if you'd like further customization or assistance with deploying this!
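
For instance, per-environment settings could be supplied at install or upgrade time, either inline or from an environment-specific values file (values-prod.yaml is only an illustrative name):

helm upgrade --install shillings-stack ./my-application-chart \
  --set shillings.replicas=5 \
  -f values-prod.yaml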