Open snifbr opened 7 months ago
I'm interested in submitting a Pull Request with a proposed solution.
Hi,
Thanks for bringing this to our attention. Can you please also share the error message so I can replicate the behaviour?
Thanks.
Hi @hyder, below is the error I'm running into:
module.iam.oci_identity_policy.cluster[0]: Still creating... [2m0s elapsed]
╷
│ Error: 409-PolicyAlreadyExists, Policy 'oke-cluster-glqyop' already exists
│ Suggestion: The resource is in a conflicted state. Please retry again or contact support for help with service: Identity Policy
│ Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/identity_policy
│ API Reference: https://docs.oracle.com/iaas/api/#/en/identity/20160918/Policy/CreatePolicy
│ Request Target: POST https://identity.sa-saopaulo-1.oci.oraclecloud.com/20160918/policies
│ Provider version: 5.28.0, released on 2024-02-07.
│ Service: Identity Policy
│ Operation Name: CreatePolicy
│ OPC request ID: 0681765e82b93cac5b6257f28f98ceec/C7D62792359AB72A8387775A651630FD/E431421C6F470A6AFE56E3A56421E5B3
│
│
│ with module.iam.oci_identity_policy.cluster[0],
│ on modules/iam/policy.tf line 20, in resource "oci_identity_policy" "cluster":
│ 20: resource "oci_identity_policy" "cluster" {
│
╵
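Since the policy already exists in the tenancy, one possible way out (a sketch, not verified against this module's internals) is to adopt the existing policy into Terraform state instead of letting Terraform try to create it again. On Terraform 1.5+ this can be written declaratively; the OCID below is a placeholder:

```hcl
# Sketch: adopt the pre-existing IAM policy into state so the next apply
# does not attempt to create it again. The OCID is a placeholder; use the
# real policy OCID from the OCI console.
import {
  to = module.iam.oci_identity_policy.cluster[0]
  id = "ocid1.policy.oc1..<existing-policy-ocid>"
}
```

The CLI form `terraform import 'module.iam.oci_identity_policy.cluster[0]' <policy-ocid>` achieves the same result on Terraform versions without import blocks.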
@robo-cap
Community Note
Terraform Version and Provider Version
Terraform 1.5.7 and provider oracle/oci 5.27.0
Affected Resource(s)
module.iam.oci_identity_policy.cluster[0]
Terraform Configuration Files
provider "oci" {
  config_file_profile = "sensedia"
  region              = "sa-saopaulo-1"
}

provider "oci" {
  alias               = "home"
  config_file_profile = "sensedia"
  region              = "sa-saopaulo-1"
}

module "oke" {
  # general oci parameters
  tenancy_id     = local.tenancy_ocid
  compartment_id = local.compartment_ocid

  # identity
  create_iam_resources  = true
  create_iam_kms_policy = "always"
  cluster_kms_key_id    = local.cluster_kms_key_id

  # network
  create_vcn                  = true
  vcn_cidrs                   = local.cidrs
  vcn_create_internet_gateway = "always"
  vcn_create_nat_gateway      = "always"
  vcn_create_service_gateway  = "always"
  vcn_name                    = local.shard
  vcn_dnslabel                = replace(local.shard, "/[-]/", "")
  assign_dns                  = true

  subnets = {
    cp = { create = "always", newbits = 9 }
  }

  # network security
  nsgs = {
    cp      = { create = "always" }
    int_lb  = { create = "always" }
    pub_lb  = { create = "always" }
    workers = { create = "always" }
    pods    = { create = "always" }
  }

  allow_node_port_access       = false
  allow_pod_internet_access    = true
  allow_rules_internal_lb      = {}
  allow_rules_public_lb        = {}
  allow_worker_internet_access = true
  allow_worker_ssh_access      = true
  enable_waf                   = false
  bastion_allowed_cidrs        = []
  control_plane_allowed_cidrs  = ["0.0.0.0/0"]
  control_plane_is_public      = true
  load_balancers               = "both" # can be: public, internal, both
  worker_is_public             = false

  # bastion
  create_bastion = false

  # cluster
  create_cluster          = false
  preferred_load_balancer = "internal" # depends on: load_balancers value
  create_operator         = false
}
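The policy name in the error ('oke-cluster-glqyop') ends with what looks like the module's randomly generated state suffix. If the module exposes a `state_id` input controlling that suffix (an assumption based on the naming pattern; check the module's variables), pinning a distinct value per invocation should keep two runs of the module from colliding on policy names:

```hcl
# Assumption: the module derives resource-name suffixes from a state_id
# input. Giving each module invocation its own value would avoid the
# 409-PolicyAlreadyExists name clash between the network-only and the
# cluster/worker-only runs.
module "oke" {
  # ...existing configuration...
  state_id = "cluster-a" # distinct per invocation
}
```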
Debug Output
Panic Output
Expected Behavior
When you have your own Vault with a KMS key to use in the terraform-oci-oke module, you set the variables
create_iam_resources = true
, create_iam_kms_policy = "always"
, cluster_kms_key_id = var.cluster_kms_key_id
and worker_volume_kms_key_id = var.worker_volume_kms_key_id
; this way your OKE cluster uses your own KMS key.
Actual Behavior
When a Vault with your own KMS key already exists, together with a VCN and the IAM resources for KMS created by the same terraform-oci-oke module (network-only mode), a second run of the module (cluster/worker-only mode) for an OKE cluster with worker nodes conflicts on the name of the IAM policy.
If you instead create network, cluster, and workers together in a single apply, you get an error during OKE cluster creation, because the policy allowing the cluster to use your own KMS key does not exist yet: the dependency chain of resources and variables places the creation of
module.iam.oci_identity_policy.cluster[0]
after module.cluster[0].oci_containerengine_cluster.k8s_cluster
, while cluster creation depends on permission to use the KMS key.
Steps to Reproduce
Enable use of your own KMS Key:
Run terraform:
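Until the ordering is fixed in the module, the dependency inversion described above can be worked around by splitting the call and forcing the order explicitly (a sketch with assumed module labels and minimal inputs, not a verified fix):

```hcl
# Sketch: create the IAM/KMS policy in a first module call, then make the
# cluster/worker call depend on it, so the policy exists before the OKE
# cluster is created.
module "oke_iam" {
  source                = "oracle-terraform-modules/oke/oci"
  create_iam_resources  = true
  create_iam_kms_policy = "always"
  create_cluster        = false
  # ...other required inputs...
}

module "oke_cluster" {
  source         = "oracle-terraform-modules/oke/oci"
  create_cluster = true
  # ...other required inputs...

  # Explicit ordering: wait for the IAM policy before creating the cluster.
  depends_on = [module.oke_iam]
}
```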
Important Factoids
I have already tried applying the terraform-oci-oke module both with network/cluster/worker together and split into two pieces (network, then cluster/worker), but the root cause of the error involving
module.iam.oci_identity_policy.cluster[0]
persists.
References