bcgov / cas-pipeline

A collection of make functions used to compose pipelines
Apache License 2.0

Add directions and script for migration off of Terraform Cloud #83

Closed joshgamache closed 6 months ago

joshgamache commented 8 months ago

I want to move the migration script used in cas-cif into cas-pipeline, along with its directions, because we will be reusing the same pattern and script on other repos, and this will help remove our reliance on Terraform Cloud.

Acceptance criteria:

Considerations and notes

If time allows, it would be better to have a Makefile that performs all (or as many as possible) of the required steps, but only if the time spent on it is less than simply following the directions for each subsequent repo.
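As a rough idea of the shape that could take, here is a minimal sketch of a Makefile wrapping a few of the `oc`/`terraform` commands from the directions in the following comment. None of these targets exist in cas-pipeline yet; the target names, the `ENV`/`CHART_DIR` variables, and the paths are assumptions.

```make
# Sketch only: hypothetical targets wrapping a few of the manual migration steps.
# ENV and CHART_DIR are assumptions; recipe lines must be indented with tabs.
ENV ?= dev
CHART_DIR ?= chart/cas-cif/terraform

.PHONY: tf-backend tf-init tf-plan

# Pull the GCS backend config out of the OpenShift secret
tf-backend:
	oc get secret gcp-credentials-secret -o go-template \
		--template="{{.data.tf_backend|base64decode}}" > $(CHART_DIR)/gcp-$(ENV).tfbackend

# Initialize Terraform against the GCS backend
tf-init: tf-backend
	cd $(CHART_DIR) && terraform init -backend-config=gcp-$(ENV).tfbackend

# Sanity-check that the new state is reachable
tf-plan: tf-init
	cd $(CHART_DIR) && terraform plan -var-file=local.tfvars
```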

joshgamache commented 8 months ago

Here are the directions that we used for the migration in cas-cif. These may need to be slightly rewritten to accommodate usage outside that repo.

Directions for migration away from Terraform Cloud to GCS backends

  1. Navigate to the Helm chart's directory for the project (e.g. `/chart/cas-cif`), then to the `terraform` directory within it.
  2. Acquire the terraform@ggl-cas-storage.iam.gserviceaccount.com credentials from the CAS 1Password (named `cas-pipeline/gcp-tf-credentials.json`) and place them in the Helm chart's Terraform directory (e.g. `/chart/cas-cif/terraform/credentials.json`). WARNING: Do not commit these to Git or Helm!
  3. Make a copy of `migration_example.tfvars` named `local.tfvars`. Fill in the values for the various keys based on the project you are in. (See the 1Password item named "Migration local.tfvars base" for reusable values.)
  4. Log in to OpenShift through the GUI, then log in through the CLI (keep this login tab open). Note: your API token (used in step 6) changes every time you log in, so you will need to copy it each time.
  5. Ensure you are in the namespace matching the project you want to work with, using `oc project {NAMESPACE_WITH_ENVIRONMENT}` (e.g. `oc project c1234-dev`).
  6. Copy your API token from the CLI login tab you kept open and paste it into your `local.tfvars` file as the `kubernetes_token` key.
  7. Get the Terraform backend config from the OpenShift secret `gcp-credentials-secret.tf_backend` with `oc get secret gcp-credentials-secret -o go-template --template="{{.data.tf_backend|base64decode}}" > gcp-dev.tfbackend`. Change the target file name (e.g. `gcp-dev.tfbackend`) depending on the environment (dev, test, prod). NOTE: Ensure that the `bucket` value in this file matches your intended namespace!
  8. Open `gcp-dev.tfbackend` in your code editor and change the `credentials` key to the value `credentials.json`. This is the path of the credentials file from step 2.
  9. Initialize the Terraform state with `terraform init -backend-config=gcp-dev.tfbackend`.
  10. Run `terraform plan -var-file=local.tfvars` to ensure the state was created. This command is expected to want to create a number of new items.
  11. Create a `temp-state` directory within the directory where Terraform is being run (e.g. `/chart/cas-cif/terraform` => `/chart/cas-cif/terraform/temp-state`).
  12. Back up the GCS remote state locally with `terraform state pull > ./temp-state/local.tfstate`.
  13. Acquire `tfcloud.tfstate` and place it in the `temp-state` directory. This can be acquired via app.terraform.io (see "Getting state from Terraform Cloud" below) or via a CLI command (TODO: figure out this command).
  14. Ensure the resources to migrate are mapped out in `./tf-migrate.sh`. (See [[Shadowing with Josh L#Further notes for tf-migrate.sh script]] below; a rough illustration also follows this list.)
  15. Run the migration script `./tf-migrate.sh`.
  16. Push the local state to GCS with `terraform state push "./temp-state/local.tfstate"`.
  17. Check that the state pushed properly with `terraform state list` (you should expect a list of resources), then run `terraform plan -var-file=local.tfvars`.
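For illustration only (the actual `tf-migrate.sh` is not reproduced here): one way to map resources across is to move each resource address from the exported Terraform Cloud state file into the locally pulled state using `terraform state mv` with the legacy `-state`/`-state-out` flags. The resource addresses below are placeholders, and this assumes a Terraform version and working directory where those local-state flags are still accepted.

```sh
#!/usr/bin/env bash
# Illustration only -- not the real tf-migrate.sh. Resource addresses are
# placeholders; list the actual resources to migrate for each repo.
set -euo pipefail

SRC=./temp-state/tfcloud.tfstate   # state exported from Terraform Cloud (step 13)
DEST=./temp-state/local.tfstate    # state pulled from the GCS backend (step 12)

for addr in \
  "kubernetes_namespace.example" \
  "google_storage_bucket.example"; do
  terraform state mv -state="$SRC" -state-out="$DEST" "$addr" "$addr"
done
```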

Getting state from Terraform Cloud

Via the app GUI

  1. Log in to app.terraform.io
  2. Click into the cas-silver workspace