j3-signalroom / iac-confluent-resources-tf

This Terraform configuration leverages the IaC Confluent API Key Rotation Terraform module to create and rotate API keys. It then uses AWS Secrets Manager to store the current active API key for the Schema Registry cluster and the Kafka cluster, and adds the related Kafka client parameters to the AWS Systems Manager Parameter Store (see the sketch below).
https://linkedin.com/in/jeffreyjonathanjennings
MIT License
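A minimal sketch of that wiring, assuming hypothetical module inputs and an `active_api_key` output (the module's real interface may differ; names and values below are illustrative only — the secret and parameter paths match the plan output later in this issue):

```hcl
# Rotate the Kafka cluster API key with the rotation module
# (input/output names here are assumptions, not the module's documented interface).
module "kafka_cluster_api_key_rotation" {
  source = "git::https://github.com/j3-signalroom/iac-confluent-api_key_rotation-tf_module.git"

  owner = {
    id          = confluent_service_account.kafka_cluster_api.id
    api_version = confluent_service_account.kafka_cluster_api.api_version
    kind        = confluent_service_account.kafka_cluster_api.kind
  }
  resource = {
    id          = confluent_kafka_cluster.kafka_cluster.id
    api_version = confluent_kafka_cluster.kafka_cluster.api_version
    kind        = confluent_kafka_cluster.kafka_cluster.kind
  }
}

# Store the currently active key in AWS Secrets Manager.
resource "aws_secretsmanager_secret" "kafka_cluster_api_key" {
  name        = "/confluent_cloud_resource/kafka_cluster/java_client"
  description = "Kafka Cluster secrets"
}

resource "aws_secretsmanager_secret_version" "kafka_cluster_api_key" {
  secret_id     = aws_secretsmanager_secret.kafka_cluster_api_key.id
  secret_string = jsonencode(module.kafka_cluster_api_key_rotation.active_api_key)
}

# Non-secret Kafka client settings go to the Systems Manager Parameter Store.
resource "aws_ssm_parameter" "consumer_kafka_client_auto_offset_reset" {
  name  = "/confluent_cloud_resource/consumer_kafka_client/auto.offset.reset"
  type  = "String"
  value = "earliest"
}
```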

`Error: No valid credential sources found` #65

Closed: j3-signalroom closed this 2 weeks ago

j3-signalroom commented 2 weeks ago

[screenshot attached]
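For context: "No valid credential sources found" typically means the AWS provider (or backend) exhausted its entire credential chain without finding anything. A minimal provider block that relies on that chain, purely for illustration:

```hcl
provider "aws" {
  # No static credentials: the provider walks its standard chain
  # (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN env vars,
  # shared config/credentials files, then instance metadata).
  # "No valid credential sources found" means every source came up empty,
  # e.g. when the variables are exported on the CI runner but the run
  # actually executes remotely (HCP Terraform) where they are not configured.
  region = "us-east-1"
}
```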

j3-signalroom commented 2 weeks ago
Current runner version: '2.319.1'
Runner name: 'J3s-MacBook-Pro'
Runner group name: 'Default'
Machine name: 'J3s-MacBook-Pro'
Testing runner upgrade compatibility
GITHUB_TOKEN Permissions
Secret source: Actions
Prepare workflow directory
Prepare all required actions
Getting action download info
Download action repository 'actions/checkout@v4' (SHA:692973e3d937129bcbf40652eb9f2f61becf3332)
Download action repository 'aws-actions/configure-aws-credentials@v4' (SHA:e3dd6a429d7300a6a4c196c26e071d42e0343502)
Download action repository 'hashicorp/setup-terraform@v3' (SHA:b9cd54a3c349d3f38e8881555d616ced269862dd)
Complete job name: deploy terraform configuration
3s
Run actions/checkout@v4
Syncing repository: j3-signalroom/iac-confluent-resources-tf
Getting Git version info
Copying '/Users/jeffreyjonathanjennings/.gitconfig' to '/Users/jeffreyjonathanjennings/actions-runner/_work/_temp/92fb5e67-39f4-43d9-bf1a-93e9cd601dc5/.gitconfig'
Temporarily overriding HOME='/Users/jeffreyjonathanjennings/actions-runner/_work/_temp/92fb5e67-39f4-43d9-bf1a-93e9cd601dc5' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /Users/jeffreyjonathanjennings/actions-runner/_work/iac-confluent-resources-tf/iac-confluent-resources-tf
/usr/bin/git config --local --get remote.origin.url
https://github.com/j3-signalroom/iac-confluent-resources-tf
Removing previously created refs, to avoid conflicts
/usr/bin/git submodule status
Cleaning the repository
Disabling automatic garbage collection
Setting up auth
Fetching the repository
Determining the checkout info
/usr/bin/git sparse-checkout disable
/usr/bin/git config --local --unset-all extensions.worktreeConfig
Checking out the ref
/usr/bin/git log -1 --format='%H'
'7e3181d06e6cb2a0fd82fbdc7fdd28dee6f31605'
0s
Prepare all required actions
Run ./.github/actions/aws-environment-info
Run echo "AWS_ACCOUNT_ID=211125543747" >> $GITHUB_ENV
Run echo "AWS_ACCOUNT_ID=211125543747" >> $GITHUB_OUTPUT
1s
29s
Run hashicorp/setup-terraform@v3
/usr/bin/unzip -o -q /Users/jeffreyjonathanjennings/actions-runner/_work/_temp/fa9237e9-e41e-4a86-946e-83f65b8d69fd
2m 29s
Run terraform init
  terraform init
  shell: /bin/bash -e {0}
  env:
    AWS_ACCOUNT_ID: 211125543747
    AWS_DEFAULT_REGION: us-east-1
    AWS_REGION: us-east-1
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
    AWS_SESSION_TOKEN: ***
    TERRAFORM_CLI_PATH: /Users/jeffreyjonathanjennings/actions-runner/_work/_temp/ef1da591-e63f-401f-9f22-937cd96105b3
Initializing HCP Terraform...
Initializing modules...
Downloading git::https://github.com/j3-signalroom/iac-confluent-api_key_rotation-tf_module.git for kafka_cluster_api_key_rotation...
- kafka_cluster_api_key_rotation in .terraform/modules/kafka_cluster_api_key_rotation
Downloading git::https://github.com/j3-signalroom/iac-confluent-api_key_rotation-tf_module.git for schema_registry_cluster_api_key_rotation...
- schema_registry_cluster_api_key_rotation in .terraform/modules/schema_registry_cluster_api_key_rotation
Initializing provider plugins...
- Reusing previous version of confluentinc/confluent from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/time from the dependency lock file
- Installing confluentinc/confluent v2.1.0...
- Installed confluentinc/confluent v2.1.0 (self-signed, key ID 5186AD92BC23B670)
- Installing hashicorp/aws v5.65.0...
- Installed hashicorp/aws v5.65.0 (signed by HashiCorp)
- Installing hashicorp/time v0.12.0...
- Installed hashicorp/time v0.12.0 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

HCP Terraform has been successfully initialized!

You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.

If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
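Note that init targets HCP Terraform, so the plan and apply below execute remotely rather than on the self-hosted runner; AWS credentials therefore have to be visible to the HCP Terraform workspace, not just the runner's environment. A sketch of the cloud block implied by this output (organization and workspace names taken from the run URL in the plan step below):

```hcl
terraform {
  cloud {
    organization = "signalroom"

    workspaces {
      name = "iac-confluent-resources-workspace"
    }
  }
}
```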
51s
34s
Run terraform plan
Running plan in HCP Terraform. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/signalroom/iac-confluent-resources-workspace/runs/run-sUE8C8t8zGEiJypz

Waiting for the plan to start...
Terraform v1.9.3
on linux_amd64
Initializing plugins and modules...
data.confluent_organization.env: Refreshing...
data.confluent_organization.env: Refresh complete after 1s [id=bd545cc3-8e7c-4387-b6d8-b6d1497a9df7]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)
Terraform will perform the following actions:
  # data.confluent_schema_registry_cluster.env will be read during apply
 <= data "confluent_schema_registry_cluster" "env" {
      + api_version   = (known after apply)
      + cloud         = (known after apply)
      + display_name  = (known after apply)
      + id            = (known after apply)
      + kind          = (known after apply)
      + package       = (known after apply)
      + region        = (known after apply)
      + resource_name = (known after apply)
      + rest_endpoint = (known after apply)
      + environment {
          + id = (known after apply)
        }
    }
  # aws_secretsmanager_secret.kafka_cluster_api_key will be created
  + resource "aws_secretsmanager_secret" "kafka_cluster_api_key" {
      + arn                            = (known after apply)
      + description                    = "Kafka Cluster secrets"
      + force_overwrite_replica_secret = false
      + id                             = (known after apply)
      + name                           = "/confluent_cloud_resource/kafka_cluster/java_client"
      + name_prefix                    = (known after apply)
      + policy                         = (known after apply)
      + recovery_window_in_days        = 30
      + tags_all                       = (known after apply)
      + replica (known after apply)
    }
  # aws_secretsmanager_secret.schema_registry_cluster_api_key will be created
  + resource "aws_secretsmanager_secret" "schema_registry_cluster_api_key" {
      + arn                            = (known after apply)
      + description                    = "Schema Registry Cluster secrets"
      + force_overwrite_replica_secret = false
      + id                             = (known after apply)
      + name                           = "/confluent_cloud_resource/schema_registry_cluster/java_client"
      + name_prefix                    = (known after apply)
      + policy                         = (known after apply)
      + recovery_window_in_days        = 30
      + tags_all                       = (known after apply)
      + replica (known after apply)
    }
  # aws_secretsmanager_secret_version.kafka_cluster_api_key will be created
  + resource "aws_secretsmanager_secret_version" "kafka_cluster_api_key" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + secret_id      = (known after apply)
      + secret_string  = (sensitive value)
      + version_id     = (known after apply)
      + version_stages = (known after apply)
    }
  # aws_secretsmanager_secret_version.schema_registry_cluster_api_key will be created
  + resource "aws_secretsmanager_secret_version" "schema_registry_cluster_api_key" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + secret_id      = (known after apply)
      + secret_string  = (sensitive value)
      + version_id     = (known after apply)
      + version_stages = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_auto_commit_interval_ms will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_auto_commit_interval_ms" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "The 'auto.commit.interval.ms' property in Apache Kafka defines the frequency (in milliseconds) at which the Kafka consumer automatically commits offsets. This is relevant when 'enable.auto.commit' is set to true, which allows Kafka to automatically commit the offsets periodically without requiring the application to do so explicitly."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/auto.commit.interval.ms"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_auto_offset_reset will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_auto_offset_reset" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "Specifies the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the 'earliest' offset or the 'latest' offset (the default)."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/auto.offset.reset"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_basic_auth_credentials_source will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_basic_auth_credentials_source" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the source of the credentials for basic authentication."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/basic.auth.credentials.source"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_client_dns_lookup will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_client_dns_lookup" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies how the client should resolve the DNS name of the Kafka brokers."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/client.dns.lookup"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_enable_auto_commit will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_enable_auto_commit" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "When set to true, the Kafka consumer automatically commits the offsets of messages it has processed at regular intervals, specified by the 'auto.commit.interval.ms' property. If set to false, the application is responsible for committing offsets manually."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/enable.auto.commit"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_max_poll_interval_ms will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_max_poll_interval_ms" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property defines the maximum amount of time (in milliseconds) that can pass between consecutive calls to poll() on a consumer. If this interval is exceeded, the consumer will be considered dead, and its partitions will be reassigned to other consumers in the group."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/max.poll.interval.ms"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_request_timeout_ms will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_request_timeout_ms" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property sets the maximum amount of time the client will wait for a response from the Kafka broker. If the server does not respond within this time, the client will consider the request as failed and handle it accordingly."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/request.timeout.ms"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_sasl_mechanism will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_sasl_mechanism" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the SASL mechanism to be used for authentication."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/sasl.mechanism"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_security_protocol will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_security_protocol" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the protocol used to communicate with Kafka brokers."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/security.protocol"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.consumer_kafka_client_session_timeout_ms will be created
  + resource "aws_ssm_parameter" "consumer_kafka_client_session_timeout_ms" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property sets the timeout for detecting consumer failures when using Kafka's group management. If the consumer does not send a heartbeat to the broker within this period, it will be considered dead, and its partitions will be reassigned to other consumers in the group."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/consumer_kafka_client/session.timeout.ms"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.producer_kafka_client_acks will be created
  + resource "aws_ssm_parameter" "producer_kafka_client_acks" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the number of acknowledgments the producer requires the leader to have received before considering a request complete."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/producer_kafka_client/acks"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.producer_kafka_client_client_dns_lookup will be created
  + resource "aws_ssm_parameter" "producer_kafka_client_client_dns_lookup" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies how the client should resolve the DNS name of the Kafka brokers."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/producer_kafka_client/client.dns.lookup"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.producer_kafka_client_sasl_mechanism will be created
  + resource "aws_ssm_parameter" "producer_kafka_client_sasl_mechanism" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the SASL mechanism to be used for authentication."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/producer_kafka_client/sasl.mechanism"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # aws_ssm_parameter.producer_kafka_client_security_protocol will be created
  + resource "aws_ssm_parameter" "producer_kafka_client_security_protocol" {
      + arn            = (known after apply)
      + data_type      = (known after apply)
      + description    = "This property specifies the protocol used to communicate with Kafka brokers."
      + id             = (known after apply)
      + insecure_value = (known after apply)
      + key_id         = (known after apply)
      + name           = "/confluent_cloud_resource/producer_kafka_client/security.protocol"
      + tags_all       = (known after apply)
      + tier           = (known after apply)
      + type           = "String"
      + value          = (sensitive value)
      + version        = (known after apply)
    }
  # confluent_environment.env will be created
  + resource "confluent_environment" "env" {
      + display_name  = "dev"
      + id            = (known after apply)
      + resource_name = (known after apply)
      + stream_governance {
          + package = "ESSENTIALS"
        }
    }
  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "kafka_cluster"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-1"
      + rest_endpoint      = (known after apply)
      + basic {}
      + byok_key (known after apply)
      + environment {
          + id = (known after apply)
        }
      + network (known after apply)
    }
  # confluent_role_binding.kafka_cluster_api_environment_admin will be created
  + resource "confluent_role_binding" "kafka_cluster_api_environment_admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }
  # confluent_service_account.kafka_cluster_api will be created
  + resource "confluent_service_account" "kafka_cluster_api" {
      + api_version  = (known after apply)
      + description  = "Kafka Cluster API Service Account"
      + display_name = "dev-kafka_cluster-api"
      + id           = (known after apply)
      + kind         = (known after apply)
    }
  # confluent_service_account.schema_registry_cluster_api will be created
  + resource "confluent_service_account" "schema_registry_cluster_api" {
      + api_version  = (known after apply)
      + description  = "Environment API Service Account"
      + display_name = "dev-environment-api"
      + id           = (known after apply)
      + kind         = (known after apply)
    }
  # module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0] will be created
  + resource "confluent_api_key" "resouce_api_key" {
      + description            = "Creation of the Confluent Resource API Key managed by Terraform Cloud using Confluent API Key Rotation Module"
      + disable_wait_for_ready = false
      + display_name           = (known after apply)
      + id                     = (known after apply)
      + secret                 = (sensitive value)
      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
          + environment {
              + id = (known after apply)
            }
        }
      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }
  # module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1] will be created
  + resource "confluent_api_key" "resouce_api_key" {
      + description            = "Creation of the Confluent Resource API Key managed by Terraform Cloud using Confluent API Key Rotation Module"
      + disable_wait_for_ready = false
      + display_name           = (known after apply)
      + id                     = (known after apply)
      + secret                 = (sensitive value)
      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
          + environment {
              + id = (known after apply)
            }
        }
      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }
  # module.kafka_cluster_api_key_rotation.time_rotating.api_key_rotations[0] will be created
  + resource "time_rotating" "api_key_rotations" {
      + day              = (known after apply)
      + hour             = (known after apply)
      + id               = (known after apply)
      + minute           = (known after apply)
      + month            = (known after apply)
      + rfc3339          = (known after apply)
      + rotation_days    = 60
      + rotation_rfc3339 = (known after apply)
      + second           = (known after apply)
      + unix             = (known after apply)
      + year             = (known after apply)
    }
  # module.kafka_cluster_api_key_rotation.time_rotating.api_key_rotations[1] will be created
  + resource "time_rotating" "api_key_rotations" {
      + day              = (known after apply)
      + hour             = (known after apply)
      + id               = (known after apply)
      + minute           = (known after apply)
      + month            = (known after apply)
      + rfc3339          = (known after apply)
      + rotation_days    = 60
      + rotation_rfc3339 = (known after apply)
      + second           = (known after apply)
      + unix             = (known after apply)
      + year             = (known after apply)
    }
  # module.kafka_cluster_api_key_rotation.time_static.api_key_rotations[0] will be created
  + resource "time_static" "api_key_rotations" {
      + day     = (known after apply)
      + hour    = (known after apply)
      + id      = (known after apply)
      + minute  = (known after apply)
      + month   = (known after apply)
      + rfc3339 = (known after apply)
      + second  = (known after apply)
      + unix    = (known after apply)
      + year    = (known after apply)
    }
  # module.kafka_cluster_api_key_rotation.time_static.api_key_rotations[1] will be created
  + resource "time_static" "api_key_rotations" {
      + day     = (known after apply)
      + hour    = (known after apply)
      + id      = (known after apply)
      + minute  = (known after apply)
      + month   = (known after apply)
      + rfc3339 = (known after apply)
      + second  = (known after apply)
      + unix    = (known after apply)
      + year    = (known after apply)
    }
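The time_rotating / time_static pairs above are what drive the rotation: a time_rotating resource is planned for replacement once its rotation_days window (60 days here) elapses, and chaining it through time_static pins a per-slot timestamp that only moves when its clock rotates. One plausible, entirely hypothetical wiring inside a module like this:

```hcl
variable "key_slots" {
  type    = number
  default = 2
}

# Each key slot gets a clock that is destroyed and recreated every 60 days.
resource "time_rotating" "api_key_rotations" {
  count         = var.key_slots
  rotation_days = 60
}

# Pin the timestamp of the current rotation window per slot; it only
# changes when the corresponding time_rotating resource is replaced.
resource "time_static" "api_key_rotations" {
  count   = var.key_slots
  rfc3339 = time_rotating.api_key_rotations[count.index].rfc3339
}
```

An API key resource in the module could then use `lifecycle { replace_triggered_by = [time_static.api_key_rotations[count.index]] }` so each rotation forces a fresh key for one slot while the other slot keeps the previous key alive.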
  # module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0] will be created
  + resource "confluent_api_key" "resouce_api_key" {
      + description            = "Creation of the Confluent Resource API Key managed by Terraform Cloud using Confluent API Key Rotation Module"
      + disable_wait_for_ready = false
      + display_name           = (known after apply)
      + id                     = (known after apply)
      + secret                 = (sensitive value)
      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
          + environment {
              + id = (known after apply)
            }
        }
      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }
  # module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1] will be created
  + resource "confluent_api_key" "resouce_api_key" {
      + description            = "Creation of the Confluent Resource API Key managed by Terraform Cloud using Confluent API Key Rotation Module"
      + disable_wait_for_ready = false
      + display_name           = (known after apply)
      + id                     = (known after apply)
      + secret                 = (sensitive value)
      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
          + environment {
              + id = (known after apply)
            }
        }
      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }
  # module.schema_registry_cluster_api_key_rotation.time_rotating.api_key_rotations[0] will be created
  + resource "time_rotating" "api_key_rotations" {
      + day              = (known after apply)
3m 9s
aws_secretsmanager_secret.schema_registry_cluster_api_key: Creation complete after 0s [id=arn:aws:secretsmanager:us-east-1:211125543747:secret:/confluent_cloud_resource/schema_registry_cluster/java_client-aFHn1Y]
aws_ssm_parameter.producer_kafka_client_acks: Creation complete after 0s [id=/confluent_cloud_resource/producer_kafka_client/acks]
confluent_environment.env: Creation complete after 0s [id=env-ox0onj]
confluent_kafka_cluster.kafka_cluster: Creating...
aws_ssm_parameter.consumer_kafka_client_session_timeout_ms: Creation complete after 0s [id=/confluent_cloud_resource/consumer_kafka_client/session.timeout.ms]
confluent_service_account.schema_registry_cluster_api: Creation complete after 1s [id=sa-891km0]
confluent_service_account.kafka_cluster_api: Creation complete after 1s [id=sa-01xwjp]
confluent_role_binding.kafka_cluster_api_environment_admin: Creating...
confluent_kafka_cluster.kafka_cluster: Still creating... [10s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [10s elapsed]
confluent_kafka_cluster.kafka_cluster: Creation complete after 12s [id=lkc-yk30m7]
data.confluent_schema_registry_cluster.env: Refreshing...
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Creating...
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Creating...
data.confluent_schema_registry_cluster.env: Refresh complete after 1s [id=lsrc-wk6r59]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Creating...
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Creating...
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [20s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [10s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [10s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [10s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [10s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [30s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [20s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [20s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [20s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [20s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [40s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [30s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [30s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [30s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [30s elapsed]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Creation complete after 31s [id=QROW4QV7NQE3YATM]
module.schema_registry_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Creation complete after 31s [id=A55U5GKKSFOVVHF5]
aws_secretsmanager_secret_version.schema_registry_cluster_api_key: Creating...
aws_secretsmanager_secret_version.schema_registry_cluster_api_key: Creation complete after 0s [id=arn:aws:secretsmanager:us-east-1:211125543747:secret:/confluent_cloud_resource/schema_registry_cluster/java_client-aFHn1Y|terraform-20240902000552807800000003]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [50s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [40s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [40s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [1m0s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [50s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [50s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [1m10s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m0s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m0s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [1m20s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m10s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m10s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Still creating... [1m30s elapsed]
confluent_role_binding.kafka_cluster_api_environment_admin: Creation complete after 1m30s [id=rb-zkm4WN]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m20s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m20s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m30s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m30s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m40s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m40s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [1m50s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [1m50s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Still creating... [2m0s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Still creating... [2m0s elapsed]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[0]: Creation complete after 2m2s [id=3YEID2CEKLSTZTPF]
module.kafka_cluster_api_key_rotation.confluent_api_key.resouce_api_key[1]: Creation complete after 2m2s [id=2ERWZNZHU54XFXND]
aws_secretsmanager_secret_version.kafka_cluster_api_key: Creating...
aws_secretsmanager_secret_version.kafka_cluster_api_key: Creation complete after 0s [id=arn:aws:secretsmanager:us-east-1:211125543747:secret:/confluent_cloud_resource/kafka_cluster/java_client-TEfHMZ|terraform-20240902000722903700000004]
Apply complete! Resources: 35 added, 0 changed, 0 destroyed.
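Once applied, downstream configurations can read the rotated key back from the names created above; a minimal sketch (the JSON shape of secret_string is an assumption):

```hcl
# Read the active Kafka API key written by this configuration.
data "aws_secretsmanager_secret_version" "kafka_client" {
  secret_id = "/confluent_cloud_resource/kafka_cluster/java_client"
}

data "aws_ssm_parameter" "auto_offset_reset" {
  name = "/confluent_cloud_resource/consumer_kafka_client/auto.offset.reset"
}

locals {
  kafka_api_key = jsondecode(data.aws_secretsmanager_secret_version.kafka_client.secret_string)
}
```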
0s
Post job cleanup.
1s
Post job cleanup.
/usr/bin/git version
git version 2.39.5 (Apple Git-154)
Copying '/Users/jeffreyjonathanjennings/.gitconfig' to '/Users/jeffreyjonathanjennings/actions-runner/_work/_temp/e24e6469-c7b8-4af5-a18b-2216bb5899cf/.gitconfig'
Temporarily overriding HOME='/Users/jeffreyjonathanjennings/actions-runner/_work/_temp/e24e6469-c7b8-4af5-a18b-2216bb5899cf' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /Users/jeffreyjonathanjennings/actions-runner/_work/iac-confluent-resources-tf/iac-confluent-resources-tf
/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
http.https://github.com/.extraheader
/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
0s
Cleaning up orphan processes