hashicorp / tfc-workflows-github

HCP Terraform starter workflows and GitHub Actions to automate Terraform Cloud CI/CD pipelines.
Mozilla Public License 2.0

feature: few thoughts for other functionalities #5

Open juicybaba opened 1 year ago

juicybaba commented 1 year ago

Glad there is an official release of the Terraform GitHub Actions; we've been doing this internally and it is a pain to reinvent the wheel...

A few thoughts based on my use case:

  1. Is it possible to support Terraform Enterprise, maybe just by allowing customers to provide the host? I believe the URL (path) schema for TFC and TFE is the same; only the host is different.
  2. Is it possible to make the directory optional? The Terraform workspace should already know where the working directory is. This would also be very helpful when multiple workspaces (with source code in different repos) need to be triggered in a certain order and monitored from a central place, e.g. repo1 holds the metadata for the infra, and a few other repos are consumers of the first repo that need to be run (without a code change) whenever the first repo is updated.
  3. This might be too much, but returning a list of workspaces based on tags/keywords/commit SHA could be interesting, e.g. trigger a run for all workspaces with a production tag or with project1 in the name.
srlynch1 commented 1 year ago
  1. This one is possible today:

hostname:
  required: false
  description: "The hostname of a Terraform Enterprise installation, if using Terraform Enterprise. Defaults to Terraform Cloud (app.terraform.io) if TF_HOSTNAME environment variable is not set."
  default: ""

See https://github.com/hashicorp/tfc-workflows-github/blob/main/actions/create-run/action.yml
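
For illustration, a minimal sketch of that point: the same API call targets TFE instead of TFC by swapping the host, taken here from the TF_HOSTNAME environment variable named in the input description above. The TFE_TOKEN variable and the organization name mirror the Python example further down and are placeholders.

import os
import requests

# Host of the TFE installation; falls back to Terraform Cloud if TF_HOSTNAME is unset.
hostname = os.environ.get("TF_HOSTNAME", "app.terraform.io")
api_token = os.environ.get("TFE_TOKEN")
organization = "hashi-demos-apj"  # placeholder organization name

# Same API path as for TFC; only the host differs.
url = f"https://{hostname}/api/v2/organizations/{organization}/workspaces"
headers = {
    "Authorization": f"Bearer {api_token}",
    "Content-Type": "application/vnd.api+json",
}

response = requests.get(url, headers=headers)
response.raise_for_status()
print([ws["attributes"]["name"] for ws in response.json()["data"]])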

  2. The working directory refers to the repository and the relative path to your Terraform configuration, not the workspace. I'm not sure I understand the ask exactly, but you can use a reusable workflow: set the working directory in an environment variable on each repo that calls the external reusable workflow, without requiring a code change.

  3. This one is possible today if you build out a matrix workflow. You will need to build the query yourself, though, since the actions are targeted at the Terraform workflow: run your query to produce an array, then pass it to the matrix strategy job. Below is an example in Python that builds the array from a tag query. An example of using these actions with a matrix strategy and a dynamic array is here: https://github.com/hashi-demo-lab/admin-workspace-onboarding-large-scale/blob/main/.github/workflows/workspace-onboarding.yml

import os
import requests

# Set your Terraform Cloud API token
api_token = os.environ.get("TFE_TOKEN")

# Set the organization name
organization = "hashi-demos-apj"

# Get the "nsx" portion of the search query from an environment variable
query = os.environ.get("TFE_TAG_QUERY")

# Construct the API endpoint URL
url = f"https://app.terraform.io/api/v2/organizations/{organization}/workspaces?search[tags]={query}"

# Set the request headers
headers = {
    "Authorization": f"Bearer {api_token}",
    "Content-Type": "application/vnd.api+json",
}

# Send the API request
response = requests.get(url, headers=headers)

# Check if the request was successful
if response.status_code == 200:
    data = response.json()

    # Extract the workspaces from the response
    workspaces = data.get("data", [])

    # Create an array to store the workspace names
    workspace_names = []

    # Iterate over the workspaces
    for workspace in workspaces:
        # Extract the workspace name
        workspace_name = workspace.get("attributes", {}).get("name")
        if workspace_name:
            workspace_names.append(workspace_name)

    # Print the workspace names
    print(workspace_names)
else:
    print(f"Failed to retrieve workspaces. Status code: {response.status_code}")
juicybaba commented 1 year ago

@srlynch1 thank you so much for your reply. I am able to do #1 and #2 now; I didn't read the sample carefully and was confused by the configuration action and the directory.

One more thing: create-run will fail if the target workspace is locked (e.g. the previous run didn't apply, or a plan is still running). Any plan to add a flag to do actions/force-execute? Because when a PR is merged, a run is triggered automatically, so we either force-execute a new run and work with that, or find the auto-triggered run (by commit SHA?), follow the plan status, and apply afterwards.
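
Until something like that exists, a possible workaround (a hedged sketch using the standard HCP Terraform/TFE Runs API directly, outside of these actions) is to find the run that is holding the workspace lock before calling create-run, then apply, discard, or cancel it. The environment variables, organization, and workspace name follow the earlier example and are placeholders. There is also a POST /runs/:run_id/actions/force-execute endpoint, which cancels the runs queued ahead of a newly created run.

import os
import requests

hostname = os.environ.get("TF_HOSTNAME", "app.terraform.io")
api_token = os.environ.get("TFE_TOKEN")
organization = "hashi-demos-apj"                                 # placeholder
workspace_name = os.environ.get("TF_WORKSPACE", "my-workspace")  # placeholder

base = f"https://{hostname}/api/v2"
headers = {
    "Authorization": f"Bearer {api_token}",
    "Content-Type": "application/vnd.api+json",
}

# Resolve the workspace ID from its name.
workspace = requests.get(
    f"{base}/organizations/{organization}/workspaces/{workspace_name}",
    headers=headers,
).json()["data"]

# List recent runs and pick the first one in a common in-progress or
# awaiting-confirmation state (a subset of the possible statuses, for brevity);
# that is the run keeping the workspace locked.
runs = requests.get(
    f"{base}/workspaces/{workspace['id']}/runs", headers=headers
).json()["data"]
unfinished = ("pending", "planning", "planned", "cost_estimated", "policy_checked")
blocking = next(
    (run for run in runs if run["attributes"]["status"] in unfinished), None
)

if blocking is None:
    print("No blocking run found; safe to create a new run.")
elif blocking["attributes"]["status"] in ("planned", "cost_estimated", "policy_checked"):
    # The run is waiting for confirmation: apply it (or POST to
    # .../actions/discard instead to drop it).
    requests.post(
        f"{base}/runs/{blocking['id']}/actions/apply",
        headers=headers,
        json={"comment": "Applied by CI before creating a new run."},
    )
else:
    # The run is still pending or planning: cancel it so the workspace unlocks.
    requests.post(f"{base}/runs/{blocking['id']}/actions/cancel", headers=headers)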