cloud-native-toolkit / terraform-gitops-ibm-portworx

Module to populate a gitops repository with the resources required to provision Portworx in an OpenShift cluster

IBM Portworx gitops

Module to populate a gitops repository with the resources necessary to provision Portworx in an IBM Cloud environment. The Portworx install is complex with many moving parts. Here's what gets installed:

  1. ConfigMap/portworx-ibm-portworx - Contains the scripts that provide all the logic for the jobs and daemonset
  2. ServiceAccount/portworx-ibm-portworx - The service account the DaemonSet and Job will run under
  3. ClusterRole/portworx-ibm-portworx and ClusterRoleBinding/portworx-ibm-portworx - Provides the necessary access for the portworx-ibm-portworx ServiceAccount across the cluster
  4. Role/portworx-ibm-portworx and RoleBinding/portworx-ibm-portworx - Provides the necessary access for the portworx-ibm-portworx ServiceAccount in the namespace where the helm chart is deployed
  5. Job/portworx-ibm-portworx-job - Extracts the cluster id from the providerID on the node and writes the information to a secret
  6. DaemonSet/ibm-portworx - Provisions a volume for each node in the cluster and attaches the volume to the node. The volume name is based on the node name; if the volume already exists, it is simply attached to the node, and if it is already attached the daemonset does nothing. When a daemonset pod is deleted, it should remove the volume attachment from its node.
  7. ConfigMap/ibmcloud-operator-defaults - Provides the region and resource group id where the portworx service should be provisioned on IBM Cloud
  8. SealedSecret/ibmcloud-operator-secret - Provides the API key used to provision the portworx service
  9. Service.IBMCloud/portworx - CR for the IBM Cloud Operator to provision an instance of the Portworx service on IBM Cloud. After the Operator has applied the CR, a Portworx service instance should be visible on the IBM Cloud console and the Portworx helm chart should be deployed in the kube-system namespace (a hedged sketch of this CR appears after this list)
  10. Job/portworx-ibm-portworx-delete - Cleans up the Portworx helm chart from the kube-system namespace when the service is destroyed (the service does not clean itself up properly)
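
To give a sense of item 9, a minimal sketch of the Service CR is shown below. The apiVersion, plan name and parameters are assumptions for illustration only; the CR that the module actually writes into the gitops repository is the source of truth.

apiVersion: ibmcloud.ibm.com/v1alpha1   # assumption; depends on the installed IBM Cloud Operator version
kind: Service
metadata:
  name: portworx
spec:
  serviceClass: portworx
  plan: px-enterprise                   # hypothetical plan name
  parameters:
    - name: clusters                    # hypothetical parameter carrying the cluster id gathered by the job
      value: <cluster-id>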

Software dependencies

The module depends on the following software components:

Command-line tools

Terraform providers

Module dependencies

This module makes use of the output from other modules:

  • gitops - github.com/cloud-native-toolkit/terraform-tools-gitops.git
  • namespace - github.com/cloud-native-toolkit/terraform-gitops-namespace.git

Example usage

module "portworx" {
  source = "github.com/cloud-native-toolkit/terraform-gitops-ibm-portworx.git"

   gitops_config = module.gitops.gitops_config
   git_credentials = module.gitops.git_credentials
   server_name = module.gitops.server_name
   namespace = module.gitops_namespace.name
   kubeseal_cert = module.gitops.sealed_secrets_cert
   resource_group_id = module.resource_group.id
   ibmcloud_api_key = var.ibmcloud_api_key
}

Anatomy of the GitOps module repository

An automation module is created from a template repository that includes a skeleton of the module logic and the automation framework to validate and release the module.

Module logic

The module follows the standard file naming convention of terraform modules: the module logic lives in main.tf, the input variables in variables.tf and the output values in outputs.tf.

Module automation

The automation modules rely heavily on GitHub Actions to automatically validate changes to the module and release new versions. The GitHub Action workflows are found in .github/workflows. There are three workflows provided by default:

Verify and release module (verify.yaml)

This workflow runs for pull requests against the main branch and when changes are pushed to the main branch.

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

The verify job checks out the module and deploys the terraform template in the test/stages folder. (More on the details of this folder in a later section.) It applies the testcase(s) listed in the strategy.matrix.testcase variable against the terraform template to validate the module logic. It then runs .github/scripts/validate-deploy.sh to verify that everything was deployed successfully. Note: this script should be customized to validate the resources provisioned by the module. After the deploy completes, the destroy logic is applied to validate it and to clean up after the test. The parameters for the test cases are defined in https://github.com/cloud-native-toolkit/action-module-verify/tree/main/env. New test cases can be added via pull request.
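
As a rough sketch, the testcase matrix is wired into the verify job along these lines. The step and testcase names below are illustrative assumptions; the authoritative definition is .github/workflows/verify.yaml.

jobs:
  verify:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        testcase:
          - ocp4_latest        # hypothetical testcase name
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # the remaining steps (not shown) deploy test/stages for the selected testcase,
      # run .github/scripts/validate-deploy.sh, then apply the destroy logic to clean up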

The verifyMetadata job checks out the module and validates the module metadata against the module metadata schema to ensure the structure is valid.

The release job creates a new release of the module. The job only runs if the verify and verifyMetadata jobs completed successfully AND if the workflow was started from a push to the main branch (i.e. not a change to a pull request). The job uses the release-drafter/release-drafter GitHub Action to create the release based on the configuration in .github/release-drafter.yaml. The configuration looks for labels on the pull request to determine the type of change for the release changelog (enhancement, bug, chore) and which portion of the version number to increment (major, minor, patch).
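
For orientation, this kind of configuration generally follows the standard release-drafter schema. The sketch below is illustrative (titles and template text are assumptions); .github/release-drafter.yaml in the repo is authoritative.

name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
categories:
  - title: 'Features'
    labels: ['enhancement']
  - title: 'Bug fixes'
    labels: ['bug']
  - title: 'Chores'
    labels: ['chore']
version-resolver:
  major:
    labels: ['major']
  minor:
    labels: ['minor']
  patch:
    labels: ['patch']
  default: patch
template: |
  ## What's changed
  $CHANGES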

Publish assets (publish-assets.yaml)

This workflow runs when a new release is published (either manually or via an automated process).

on:
  release:
    types:
      - published

When a release is created, the module is checked out and the metadata is built and validated. If the metadata checks out, it is published to the gh-pages branch as index.yaml.

Notify (notify.yaml)

This workflow runs when a new release is published (either manually or via an automated process).

on:
  release:
    types:
      - published

When a release is created, a repository dispatch is sent out to the repositories listed in the strategy.matrix.repo variable. By default, the automation-modules and ibm-garage-iteration-zero repositories are notified. When those repositories receive the notification, an automation workflow is triggered on their end to deal with the newly available module version.
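
Conceptually, the notify job fans a repository_dispatch event out to each repository in the matrix. The sketch below shows the general shape; the secret name, event type and step implementation are assumptions rather than the actual workflow contents.

jobs:
  notify:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo:
          - automation-modules
          - ibm-garage-iteration-zero
    steps:
      - name: Repository dispatch
        run: |
          # secret name and event_type below are assumptions
          curl -X POST \
            -H "Accept: application/vnd.github.v3+json" \
            -H "Authorization: token ${{ secrets.TOKEN }}" \
            https://api.github.com/repos/cloud-native-toolkit/${{ matrix.repo }}/dispatches \
            -d '{"event_type": "released"}'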

Module metadata

The module metadata adds extra descriptive information about the module that is used to build out the module catalog.

name: ""
type: gitops
description: ""
tags:
  - tools
  - gitops
versions:
  - platforms:
      - kubernetes
      - ocp3
      - ocp4
    dependencies:
      - id: gitops
        refs:
          - source: github.com/cloud-native-toolkit/terraform-tools-gitops.git
            version: ">= 1.1.0"
      - id: namespace
        refs:
          - source: github.com/cloud-native-toolkit/terraform-gitops-namespace.git
            version: ">= 1.0.0"
    variables:
      - name: gitops_config
        moduleRef:
          id: gitops
          output: gitops_config
      - name: git_credentials
        moduleRef:
          id: gitops
          output: git_credentials
      - name: server_name
        moduleRef:
          id: gitops
          output: server_name
      - name: namespace
        moduleRef:
          id: namespace
          output: name
      - name: kubeseal_cert
        moduleRef:
          id: gitops
          output: sealed_secrets_cert

Note: For most GitOps modules, the initial dependencies and variable mappings should be preserved. Additional dependencies and variable definitions can be added as needed.

Note: As a design point, the gitops module should ideally not have a direct dependency on the cluster and should instead depend (exclusively) on the gitops repository. That way, even if the cluster itself is inaccessible to the automation process, the software can still be installed in the cluster so long as the gitops repository is accessible.

Module test logic

The test/stages folder contains the terraform template needed to execute the module. By convention, each module is defined in its own file. Also by convention, all prereqs or dependencies for the module are named stage1-xxx and the module to be tested is named stage2-xxx. The default test templates in the GitOps repo are set up to provision a GitOps repository, log into a cluster, provision ArgoCD in the cluster and bootstrap it with the GitOps repository, provision a namespace via GitOps where the module will be deployed, and then apply the module logic. The end result of this test terraform template should be a cluster that has been provisioned with the components of the module via the GitOps repository.
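
As an illustration of that convention, a stripped-down stage file for this module might look like the sketch below. The file name, source path and the set of stage1 modules are assumptions; the actual files live in test/stages.

# hypothetical test/stages/stage2-portworx.tf - module.gitops, module.gitops_namespace and
# module.resource_group would be defined in the stage1-xxx files
module "portworx" {
  source = "./module"   # assumption: the test template points back at the module under test

  gitops_config     = module.gitops.gitops_config
  git_credentials   = module.gitops.git_credentials
  server_name       = module.gitops.server_name
  namespace         = module.gitops_namespace.name
  kubeseal_cert     = module.gitops.sealed_secrets_cert
  resource_group_id = module.resource_group.id
  ibmcloud_api_key  = var.ibmcloud_api_key
}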

This test logic will run every time a change is made to the repository to ensure there are no regressions to the module.

GitOps repository structure

The GitOps modules assume the repository has been divided into three different layers to separate the different types of resources that will be provisioned in the cluster:

  1. infrastructure - the infrastructure layer contains cluster-wide and/or privileged resources like namespaces, RBAC configuration, service accounts, and security context constraints. Most modules won't directly use this layer but may use submodules to configure service accounts and RBAC that will be put in this layer.
  2. services - the services layer contains shared middleware and software services that may be used by multiple applications deployed within the cluster. This includes things like databases, service mesh, api management software, or multi-tenanted development tools. Most components will be placed in this layer.
  3. application - the application layer is where the gitops configuration to deploy applications that make use of the shared services is placed. Often this configuration will be applied to the GitOps repo as part of a CI/CD process to manage the application lifecycle.

Within the layers, there are three different types that can be applied:

  1. operator - operator deployments are organized in a particular way in the gitops repository
  2. instances - instances created from custom resources applied via an operator are organized in a different manner in the gitops repository
  3. base - basically everything that is not an operator or operator instance deployment falls in this category

In order to simplify the process of managing the gitops repository structure and the different configuration options, a command is provided in the igc CLI to populate the gitops repo: igc gitops-module. The layer and type are provided as arguments to the command, along with the directory where the yaml for the module is located and the details of the gitops repo.

The yaml used to define the resources required to deploy the component can be defined as kustomize scripts, a helm chart, or as raw yaml in the directory. In most cases we use helm charts to simplify the required input configuration.
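
For orientation, gitops modules typically drive this command from main.tf through a local-exec provisioner along the lines of the sketch below. The flag names, local values and environment variables are assumptions for illustration; read main.tf and run igc gitops-module --help for the actual invocation.

resource "null_resource" "setup_gitops" {
  provisioner "local-exec" {
    # hypothetical invocation - layer, type, content directory and gitops server name are passed as arguments
    command = "igc gitops-module '${local.name}' -n '${var.namespace}' --contentDir '${local.yaml_dir}' --serverName '${var.server_name}' -l '${local.layer}' --type '${local.type}'"

    environment = {
      # hypothetical: the gitops repo details and credentials are provided to the cli
      GIT_CREDENTIALS = yamlencode(var.git_credentials)
      GITOPS_CONFIG   = yamlencode(var.gitops_config)
    }
  }
}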

Submitting changes

  1. Fork the module git repository into your personal org
  2. In your forked repository, add the following secrets (note: if you are working in a repo within the Cloud Native Toolkit org, these secrets are already available):
    • IBMCLOUD_API_KEY - an API Key to an IBM Cloud account where you can provision the test instances of any resources you need
    • GIT_ADMIN_USERNAME - the username of a git user with permission to create repositories
    • GIT_ADMIN_TOKEN - the personal access token of a git user with permission to create repositories in the target git org
    • GIT_ORG - the git org where test GitOps repos will be provisioned
  3. Create a branch in the forked repository where you will do your work
  4. Create a draft pull request in the Cloud Native Toolkit repository for your branch as soon as you push your first change. Add labels to the pull request for the type of change (enhancement, bug, chore) and the type of release (major, minor, patch) to impact the generated release documentation.
  5. When the changes are completed and the automated checks are running successfully, mark the pull request as "Ready to review".
  6. The module changes will be reviewed and the pull request merged. After the changes are merged, the automation in the repo creates a new release of the module.

Development

Adding logic and updating the test

  1. Start by implementing the logic in main.tf, adding required variables to variables.tf as necessary.
  2. Update the test/stages/stage2-xxx.tf file with any of the required variables.
  3. If the module has dependencies on other modules, add them as test/stages/stage1-xxx.tf and reference the output variables as variable inputs.
  4. Review the validation logic in .github/scripts/validate-deploy.sh and update as appropriate.
  5. Push the changes to the remote branch and review the check(s) on the pull request. If the checks fail, review the log and make the necessary adjustments.