oracle-quickstart / oke-soa

Universal Permissive License v1.0

Deploy OKE - INSTALLATION FAILED: create: failed to create: namespaces "opns" not found #9

Closed. Michalski-Piotr closed this issue 2 years ago.

Michalski-Piotr commented 2 years ago

Hello, now I hit another error:

null_resource.deploy_traefik[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "traefik" not found
null_resource.deploy_traefik[0] (local-exec): Traefik is installed and running
null_resource.deploy_traefik[0]: Creation complete after 12s [id=3335026410091670222]
null_resource.deploy_wls_operator[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
╷
│ Error: local-exec provisioner error
│
│   with null_resource.deploy_wls_operator[0],
│   on provisioners.tf line 141, in resource "null_resource" "deploy_wls_operator":
│  141:   provisioner "local-exec" {
│
│ Error running command '## Copyright © 2021, Oracle and/or its affiliates.
│ ## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl
│
│ if [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then
│   kubectl create serviceaccount -n opns weblogic-operator;
│ fi
│
│ # wait for at least 1 node to be ready
│
│ while [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}'); do if [[ "$i" == "True" ]]; then echo $i;
│ fi; done | wc -l | tr -d " ") -lt 1 ]]; do
│     echo "waiting for at least 1 node to be ready..." && sleep 1;
│ done
│
│ CHART_VERSION=3.4.0
│
│ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
│
│ helm install weblogic-operator weblogic-operator/weblogic-operator \
│   --version $CHART_VERSION \
│   --namespace opns \
│   --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \
│   --set serviceAccount=weblogic-operator \
│   --set "domainNamespaces={soans}" \
│   --wait \
│   --timeout 600s || exit 1
│
│ while [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do
│   echo "Waiting for CRD to be created";
│   sleep 1;
│ done
│
│ echo "WebLogic Operator is installed and running"
│ ': exit status 1. Output: Error from server (NotFound): namespaces "opns" not found
│ /bin/sh: 4: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ "weblogic-operator" has been added to your repositories
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
│
╵
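
A note on the trace above: the repeated "/bin/sh: 4: [[: not found" lines mean the provisioner script is being executed by a plain POSIX shell (on Debian/Ubuntu systems, /bin/sh is dash), which lacks the bash-only [[ ... ]] builtin. A minimal reproduction, assuming a system where /bin/sh is dash:

$ /bin/sh -c 'if [[ -n "x" ]]; then echo ok; fi'
/bin/sh: 1: [[: not found
$ /bin/bash -c 'if [[ -n "x" ]]; then echo ok; fi'
ok

Because the [[ test fails under dash, the guarded kubectl create branches never execute, which would also explain the missing "opns" namespace if the namespace-creation provisioner uses the same guard pattern.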

There is also a quota error (I raised this with the OCI administrators and I'm waiting for the limit increase):

╷
│ Error: 400-QuotaExceeded, The following compartment quotas were exceeded: vm-standard2-ocpu-count in policy 'ocid1.quota.oc1..aaaaaaaa5kmvk5n44tacr5mixbjs5fztaxxcf4jkti5mndzutltoftc7ucna' by 2. You can locate the quota policies by searching for the quota IDs in the console.
│ Suggestion: Contact your administrator to increase limit for your account or compartment for this service: Database Db System
│ Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/database_db_system
│ API Reference: https://docs.oracle.com/iaas/api/#/en/database/20160918/DbSystem/LaunchDbSystem
│ Request Target: POST https://database.eu-frankfurt-1.oraclecloud.com/20160918/dbSystems
│ Provider version: 4.86.1, released on 2022-07-28.
│ Service: Database Db System
│ Operation Name: LaunchDbSystem
│ OPC request ID: 112c5e242a79c7e55fa9a89033adde20/3B75AFDD6FD7F9C48DF1245B83239FB5/02AE486AA1C28AE94E0E0256F646902B
│
│
│   with module.database.oci_database_db_system.db_system[0],
│   on modules/database/main.tf line 4, in resource "oci_database_db_system" "db_system":
│    4: resource "oci_database_db_system" "db_system" {
│
╵
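
For reference, compartment quotas of this kind are managed with OCI quota policy statements of the form "set <family> quota <quota-name> to <value> in <scope>". A hedged sketch only: the quota family for vm-standard2-ocpu-count is an assumption here (the failing call is LaunchDbSystem, so the database family seems likely), and the compartment name and value are illustrative:

# Assumed family and illustrative values; verify both in the OCI console.
set database quota vm-standard2-ocpu-count to 16 in compartment MyCompartment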

Could you review and confirm whether the first error is due to the quota/policy issue or is independent of it, please? Thank you.

Regards, Piotr Michalski Oracle ACS

Michalski-Piotr commented 2 years ago

Hello, the quota/policy issue has been resolved, but the error is still the same:

│ Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found

Regards, Piotr

Michalski-Piotr commented 2 years ago

Hello, I received feedback from the SOA K8S Engineering Team:

Hi, this looks like an issue with the current shell not supporting some of the commands. Check what the current shell is with the command "echo $SHELL". Also try running bash and then "terraform apply" in the bash terminal.

I'm checking this possible solution right now. Regards, Piotr
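
If the shell does turn out to be the problem, one way to pin it down on the Terraform side is the documented interpreter argument of the local-exec provisioner, which forces the command to run under bash regardless of what /bin/sh points at. A minimal sketch; the resource name mirrors provisioners.tf, and the command placeholder stands in for the existing script:

resource "null_resource" "deploy_wls_operator" {
  provisioner "local-exec" {
    # The default on Unix is ["/bin/sh", "-c"] (visible in the apply log).
    # Overriding it keeps bashisms such as [[ ... ]] working on systems
    # where /bin/sh is dash.
    interpreter = ["/bin/bash", "-c"]
    command     = "..." # the existing deployment script, unchanged
  }
}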

Michalski-Piotr commented 2 years ago

Hello, on the machine used for deployment, the default $SHELL was /bin/bash.

I tried repeating "terraform apply" without destroying the entire stack. The error is the same.

Confirmation that bash is the default shell on the provisioning machine:

$ echo $SHELL
/bin/bash
$ bash

$ echo $SHELL
/bin/bash
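
A hedged aside on this check: $SHELL reports the login shell and is not consulted by Terraform. The apply log below shows local-exec still invoking the command via "/bin/sh" "-c", its default interpreter when none is configured, and starting a bash subshell does not change that:

$ echo $SHELL                  # the login shell, inherited unchanged by subshells
/bin/bash
$ sh -c 'echo "run by: $0"'    # what local-exec effectively does with the script
run by: sh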

Full apply output:

$ terraform apply
module.database.data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_tenancy.tenancy: Reading...
module.fss.data.oci_identity_availability_domain.ad: Reading...
module.vcn.data.oci_identity_availability_domains.ads: Reading...
module.vcn.oci_core_virtual_network.vcn: Refreshing state... [id=ocid1.vcn.oc1.eu-frankfurt-1.amaaaaaajekywzaafg3xfr2ygyqmdtyoznedoncjevmn6hlovipol2oqg4ha]
module.node_pools.data.oci_identity_availability_domains.ads: Reading...
module.node_pools.data.oci_core_images.compatible_images[0]: Reading...
data.oci_identity_tenancy.tenancy: Read complete after 1s [id=ocid1.tenancy.oc1..aaaaaaaa4z6qchwbv6vxjgcnpnn6ofwa264xeto737fv3oyll5w3jm6hsenq]
module.vcn.data.oci_identity_availability_domains.ads: Read complete after 1s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.database.data.oci_identity_availability_domains.ads: Read complete after 1s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.node_pools.data.oci_identity_availability_domains.ads: Read complete after 1s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.fss.data.oci_identity_availability_domain.ad: Read complete after 1s [id=ocid1.availabilitydomain.oc1..aaaaaaaaiifj24st3w4j7cowuo3pmqcuqwjapjv435vtjmgh5j7q3flguwna]
module.fss.oci_file_storage_file_system.fss[0]: Refreshing state... [id=ocid1.filesystem.oc1.eu_frankfurt_1.aaaaaaaaaaaqhe2tmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.vcn.oci_core_security_list.lb_sl: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaatollzmcy6shnjxcr3w4kksxt6pqqfdubecvub7jwpozz4xsfon5q]
module.vcn.oci_core_internet_gateway.igw: Refreshing state... [id=ocid1.internetgateway.oc1.eu-frankfurt-1.aaaaaaaaq6ir43xn5mctdpcrgmqgdwo6u6mvveuugyjeaqd4za5kib3kvzgq]
module.vcn.oci_core_security_list.database_sl[0]: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaao4wsn7n2x7v4a5chmggch7srsbeu3buyq3lgv2gq6wzlke3qjfpq]
module.vcn.oci_core_nat_gateway.natgw: Refreshing state... [id=ocid1.natgateway.oc1.eu-frankfurt-1.aaaaaaaax2vpbzto6ennascb56zrs4wjs4yhbfhjt6vx2u7j2cokhol5molq]
module.vcn.oci_core_security_list.node_sl: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaaoepngz7znof4rzajnz3osjxpevse5q2u33qvkcmew3bkmfvhpvoa]
module.vcn.oci_core_route_table.private_rt: Refreshing state... [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaaoz7wjqzt56nkj5aqqqd7abufaq7gpybava6zxwja3l3wk4kouz4a]
module.vcn.oci_core_route_table.public_rt: Refreshing state... [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaawzp4dz2xlocj6nnaqelpdyslp7v5txkbq3dr6slwhzd6lusoy5na]
module.vcn.oci_core_subnet.database_subnet[0]: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa7gnbpvf3fm7n6fol2nisemk53txixkhaxxeck3bj4724zjn3vnva]
module.vcn.oci_core_subnet.cluster_nodes_subnet: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaax3w34zbagchvyxnxzvfko24nc7plsx2d2pl34amynbjoznuh45aa]
module.vcn.oci_core_subnet.cluster_lb_subnet: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa46dnlai4zal6c323ey6drqug27cnpikoq6ahsbh25bha7rpikn5q]
module.database.oci_database_db_system.db_system[0]: Refreshing state... [id=ocid1.dbsystem.oc1.eu-frankfurt-1.antheljsjekywzaapvw3u5sdn37ydbyjdadj2bbee3jh7cerey6hlf2izukq]
module.fss.oci_file_storage_mount_target.mount_target[0]: Refreshing state... [id=ocid1.mounttarget.oc1.eu_frankfurt_1.aaaaaby27vgdkdnumzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.cluster.oci_containerengine_cluster.cluster[0]: Refreshing state... [id=ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaauqdyan42hbofmnwsuqyrmwvjrtobpy3oaka7iqqlhcuvcbp7i44q]
module.fss.oci_file_storage_export_set.export_set[0]: Refreshing state... [id=ocid1.exportset.oc1.eu_frankfurt_1.aaaaaby27vgdkdntmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.fss.data.oci_core_private_ip.private_ip: Reading...
module.fss.oci_file_storage_export.export: Refreshing state... [id=ocid1.export.oc1.eu_frankfurt_1.aaaaacvippypxh7qmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.fss.data.oci_core_private_ip.private_ip: Read complete after 0s [id=ocid1.privateip.oc1.eu-frankfurt-1.aaaaaaaasqsphwpoktn5nfij4xhdlx5f5o4xjuxrup7pewruxdca7ydcjnpa]
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Reading...
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Reading...
module.node_pools.data.oci_core_images.compatible_images[0]: Read complete after 3s [id=CoreImagesDataSource-3142788072]
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Read complete after 1s [id=ContainerengineClusterKubeConfigDataSource-250260075]
local_file.helm_values: Refreshing state... [id=1e708c149f90f71bc8ccfd678fff2db657d2f953]
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Read complete after 2s [id=ContainerengineNodePoolOptionDataSource-2623010328]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Refreshing state... [id=ocid1.nodepool.oc1.eu-frankfurt-1.aaaaaaaauezlydr3shlys7vocvhygj6nck6rppfpr67d7tfpensg7irlcafq]
null_resource.cluster_kube_config[0]: Refreshing state... [id=8506941482674676182]
null_resource.create_traefik_namespace[0]: Refreshing state... [id=7898223861332946798]
null_resource.oke_admin_service_account[0]: Refreshing state... [id=1166223484105046719]
null_resource.create_soa_namespace: Refreshing state... [id=2605782112018274617]
null_resource.create_wls_operator_namespace[0]: Refreshing state... [id=3606830972021746524]
null_resource.create_soa_domain_secret[0]: Refreshing state... [id=2471724828970032088]
null_resource.docker_registry: Refreshing state... [id=5827597938346151013]
null_resource.create_rcu_secret[0]: Refreshing state... [id=4192842873903446330]
null_resource.deploy_wls_operator[0]: Refreshing state... [id=2640875542677050487]
null_resource.deploy_traefik[0]: Refreshing state... [id=2055648309370367003]
null_resource.create_db_secret[0]: Refreshing state... [id=4823638757264159887]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # null_resource.deploy_wls_operator[0] is tainted, so must be replaced
-/+ resource "null_resource" "deploy_wls_operator" {
      ~ id       = "2640875542677050487" -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.deploy_wls_operator[0]: Destroying... [id=2640875542677050487]
null_resource.deploy_wls_operator[0]: Destruction complete after 0s
null_resource.deploy_wls_operator[0]: Creating...
null_resource.deploy_wls_operator[0]: Provisioning with 'local-exec'...
null_resource.deploy_wls_operator[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nif [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then\n  kubectl create serviceaccount -n opns weblogic-operator;\nfi\n\n# wait for at least 1 node to be ready\n\nwhile [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type==\"Ready\")].status}'); do if [[ \"$i\" == \"True\" ]]; then echo $i; fi; done | wc -l | tr -d \" \") -lt 1 ]]; do\n    echo \"waiting for at least 1 node to be ready...\" && sleep 1;\ndone\n\nCHART_VERSION=3.4.0\n\nhelm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update\n\nhelm install weblogic-operator weblogic-operator/weblogic-operator \\\n  --version $CHART_VERSION \\\n  --namespace opns \\\n  --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \\\n  --set serviceAccount=weblogic-operator \\\n  --set \"domainNamespaces={soans}\" \\\n  --wait \\\n  --timeout 600s || exit 1\n\nwhile [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do\n  echo \"Waiting for CRD to be created\";\n  sleep 1;\ndone\n\necho \"WebLogic Operator is installed and running\"\n"]
module.database.oci_database_db_system.db_system[0]: Modifying... [id=ocid1.dbsystem.oc1.eu-frankfurt-1.antheljsjekywzaapvw3u5sdn37ydbyjdadj2bbee3jh7cerey6hlf2izukq]
null_resource.deploy_wls_operator[0] (local-exec): Error from server (NotFound): namespaces "opns" not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 4: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): "weblogic-operator" has been added to your repositories
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0]: Still creating... [10s elapsed]
null_resource.deploy_wls_operator[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
╷
│ Error: local-exec provisioner error
│
│   with null_resource.deploy_wls_operator[0],
│   on provisioners.tf line 141, in resource "null_resource" "deploy_wls_operator":
│  141:   provisioner "local-exec" {
│
│ Error running command '## Copyright © 2021, Oracle and/or its affiliates.
│ ## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl
│
│ if [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then
│   kubectl create serviceaccount -n opns weblogic-operator;
│ fi
│
│ # wait for at least 1 node to be ready
│
│ while [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}'); do if [[ "$i" == "True" ]]; then echo $i;
│ fi; done | wc -l | tr -d " ") -lt 1 ]]; do
│     echo "waiting for at least 1 node to be ready..." && sleep 1;
│ done
│
│ CHART_VERSION=3.4.0
│
│ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
│
│ helm install weblogic-operator weblogic-operator/weblogic-operator \
│   --version $CHART_VERSION \
│   --namespace opns \
│   --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \
│   --set serviceAccount=weblogic-operator \
│   --set "domainNamespaces={soans}" \
│   --wait \
│   --timeout 600s || exit 1
│
│ while [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do
│   echo "Waiting for CRD to be created";
│   sleep 1;
│ done
│
│ echo "WebLogic Operator is installed and running"
│ ': exit status 1. Output: Error from server (NotFound): namespaces "opns" not found
│ /bin/sh: 4: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ "weblogic-operator" has been added to your repositories
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
│
╵

I will destroy the entire stack and recreate it one more time.

Regards, Piotr Michalski Oracle ACS
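
An alternative to overriding the interpreter, sketched here under the assumption that the rest of the script stays unchanged, is to rewrite the bash-only guards in portable POSIX sh so they also run under dash:

# POSIX-compatible equivalents of the [[ ... ]] guards in the provisioner:

if ! kubectl get serviceaccount weblogic-operator -n opns >/dev/null 2>&1; then
  kubectl create serviceaccount -n opns weblogic-operator
fi

# wait for at least 1 node to be ready
while [ "$(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}' | tr ' ' '\n' | grep -c True)" -lt 1 ]; do
  echo "waiting for at least 1 node to be ready..." && sleep 1
done

# CRDs are cluster-scoped, so no -n flag is needed here
while ! kubectl get customresourcedefinition domains.weblogic.oracle >/dev/null 2>&1; do
  echo "Waiting for CRD to be created" && sleep 1
done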

Michalski-Piotr commented 2 years ago

Hello, reprovisioning the full stack also produces the same error:

$ echo $SHELL
/bin/bash

$ terraform apply
module.database.data.oci_identity_availability_domains.ads: Reading...
module.vcn.data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_tenancy.tenancy: Reading...
module.node_pools.data.oci_identity_availability_domains.ads: Reading...
module.fss.data.oci_identity_availability_domain.ad: Reading...
module.node_pools.data.oci_core_images.compatible_images[0]: Reading...
module.database.data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.node_pools.data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.vcn.data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-3881852365]
data.oci_identity_tenancy.tenancy: Read complete after 1s [id=ocid1.tenancy.oc1..aaaaaaaa4z6qchwbv6vxjgcnpnn6ofwa264xeto737fv3oyll5w3jm6hsenq]
module.fss.data.oci_identity_availability_domain.ad: Read complete after 1s [id=ocid1.availabilitydomain.oc1..aaaaaaaaiifj24st3w4j7cowuo3pmqcuqwjapjv435vtjmgh5j7q3flguwna]
module.node_pools.data.oci_core_images.compatible_images[0]: Read complete after 2s [id=CoreImagesDataSource-3142788072]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # local_file.helm_values will be created
  + resource "local_file" "helm_values" {
      + content              = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./fromtf.auto.yaml"
      + id                   = (known after apply)
    }

  # null_resource.cluster_kube_config[0] will be created
  + resource "null_resource" "cluster_kube_config" {
      + id = (known after apply)
    }

  # null_resource.create_db_secret[0] will be created
  + resource "null_resource" "create_db_secret" {
      + id       = (known after apply)
      + triggers = {
          + "name"      = "soa_domain-db-credentials"
          + "namespace" = "soans"
          + "password"  = (sensitive)
          + "username"  = "SYS"
        }
    }

  # null_resource.create_rcu_secret[0] will be created
  + resource "null_resource" "create_rcu_secret" {
      + id       = (known after apply)
      + triggers = {
          + "name"      = "soa_domain-rcu-credentials"
          + "namespace" = "soans"
          + "password"  = (sensitive)
          + "username"  = "rcu"
        }
    }

  # null_resource.create_soa_domain_secret[0] will be created
  + resource "null_resource" "create_soa_domain_secret" {
      + id       = (known after apply)
      + triggers = {
          + "name"      = "soa_domain-domain-credentials"
          + "namespace" = "soans"
          + "password"  = (sensitive)
          + "username"  = "weblogic"
        }
    }

  # null_resource.create_soa_namespace will be created
  + resource "null_resource" "create_soa_namespace" {
      + id       = (known after apply)
      + triggers = {
          + "soa_kubernetes_namespace" = "soans"
        }
    }

  # null_resource.create_traefik_namespace[0] will be created
  + resource "null_resource" "create_traefik_namespace" {
      + id       = (known after apply)
      + triggers = {
          + "ingress_namespace" = "traefik"
        }
    }

  # null_resource.create_wls_operator_namespace[0] will be created
  + resource "null_resource" "create_wls_operator_namespace" {
      + id       = (known after apply)
      + triggers = {
          + "weblogic_operator_namespace" = "opns"
        }
    }

  # null_resource.deploy_traefik[0] will be created
  + resource "null_resource" "deploy_traefik" {
      + id       = (known after apply)
      + triggers = {
          + "ingress_namespace" = "traefik"
          + "soa_namespace"     = "soans"
        }
    }

  # null_resource.deploy_wls_operator[0] will be created
  + resource "null_resource" "deploy_wls_operator" {
      + id       = (known after apply)
      + triggers = {
          + "soa_namespace"               = "soans"
          + "weblogic_operator_namespace" = "opns"
        }
    }

  # null_resource.docker_registry will be created
  + resource "null_resource" "docker_registry" {
      + id       = (known after apply)
      + triggers = {
          + "soa_kubernetes_namespace" = "soans"
        }
    }

  # null_resource.oke_admin_service_account[0] will be created
  + resource "null_resource" "oke_admin_service_account" {
      + id = (known after apply)
    }

  # module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config will be read during apply
  # (config refers to values not yet known)
 <= data "oci_containerengine_cluster_kube_config" "cluster_kube_config" {
      + cluster_id    = (known after apply)
      + content       = (known after apply)
      + expiration    = 2592000
      + id            = (known after apply)
      + token_version = "2.0.0"
    }

  # module.cluster.oci_containerengine_cluster.cluster[0] will be created
  + resource "oci_containerengine_cluster" "cluster" {
      + available_kubernetes_upgrades = (known after apply)
      + compartment_id                = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags                  = (known after apply)
      + endpoints                     = (known after apply)
      + freeform_tags                 = (known after apply)
      + id                            = (known after apply)
      + kms_key_id                    = (known after apply)
      + kubernetes_version            = "v1.23.4"
      + lifecycle_details             = (known after apply)
      + metadata                      = (known after apply)
      + name                          = "SOA-k8s-cluster"
      + state                         = (known after apply)
      + vcn_id                        = (known after apply)

      + cluster_pod_network_options {
          + cni_type = (known after apply)
        }

      + image_policy_config {
          + is_policy_enabled = (known after apply)

          + key_details {
              + kms_key_id = (known after apply)
            }
        }

      + options {
          + service_lb_subnet_ids = (known after apply)

          + add_ons {
              + is_kubernetes_dashboard_enabled = true
              + is_tiller_enabled               = true
            }

          + admission_controller_options {
              + is_pod_security_policy_enabled = (known after apply)
            }

          + kubernetes_network_config {
              + pods_cidr     = "10.27.0.0/16"
              + services_cidr = "10.28.0.0/16"
            }

          + persistent_volume_config {
              + defined_tags  = (known after apply)
              + freeform_tags = (known after apply)
            }

          + service_lb_config {
              + defined_tags  = (known after apply)
              + freeform_tags = (known after apply)
            }
        }
    }

  # module.database.oci_database_db_system.db_system[0] will be created
  + resource "oci_database_db_system" "db_system" {
      + availability_domain                     = "Rwod:EU-FRANKFURT-1-AD-1"
      + backup_subnet_id                        = (known after apply)
      + cluster_name                            = (known after apply)
      + compartment_id                          = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + cpu_core_count                          = 4
      + data_storage_percentage                 = (known after apply)
      + data_storage_size_in_gb                 = 256
      + database_edition                        = "ENTERPRISE_EDITION"
      + defined_tags                            = (known after apply)
      + disk_redundancy                         = (known after apply)
      + display_name                            = (known after apply)
      + domain                                  = (known after apply)
      + fault_domains                           = (known after apply)
      + freeform_tags                           = (known after apply)
      + hostname                                = "db"
      + id                                      = (known after apply)
      + iorm_config_cache                       = (known after apply)
      + kms_key_id                              = (known after apply)
      + kms_key_version_id                      = (known after apply)
      + last_maintenance_run_id                 = (known after apply)
      + last_patch_history_entry_id             = (known after apply)
      + license_model                           = "LICENSE_INCLUDED"
      + lifecycle_details                       = (known after apply)
      + listener_port                           = (known after apply)
      + maintenance_window                      = (known after apply)
      + memory_size_in_gbs                      = (known after apply)
      + next_maintenance_run_id                 = (known after apply)
      + node_count                              = 1
      + point_in_time_data_disk_clone_timestamp = (known after apply)
      + private_ip                              = (known after apply)
      + reco_storage_size_in_gb                 = (known after apply)
      + scan_dns_name                           = (known after apply)
      + scan_dns_record_id                      = (known after apply)
      + scan_ip_ids                             = (known after apply)
      + shape                                   = "VM.Standard2.4"
      + source                                  = (known after apply)
      + source_db_system_id                     = (known after apply)
      + sparse_diskgroup                        = (known after apply)
      + ssh_public_keys                         = [
          + "ssh-rsa MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxMgaz+TovYkhlNsEic/PQEcnr/xGPTrVy4ThVEFIyPyZibNw/4vPJ4x77gu6Pjb3UKYglqCJ7dibGRPOf/AFFfJu/i1LKO1Q4boihTsrzMNveLtG/gLiMHwvF/pBeaBh30Lrh9WL74UN8W4/9Vfbepm3GLb5334gmD0cp2hn44YQYK9Us0eEv2XcfE/RWlAc+nxTL8tbFCdXaANc2qxR4V3g8+dSfFqSNQXeUOdEqJxVsGbTCwW7AjtRztx9ZhV8FcNOl6lU92Htj9qwLPNyE1tdN1MXV6h/OnxZKmgglDZCKEkJzgJqfmneNxHUpo8QTMjyDk7fSgE7bbcTpQh7wIDAQAB",
        ]
      + state                                   = (known after apply)
      + storage_volume_performance_mode         = (known after apply)
      + subnet_id                               = (known after apply)
      + time_created                            = (known after apply)
      + time_zone                               = (known after apply)
      + version                                 = (known after apply)
      + vip_ids                                 = (known after apply)
      + zone_id                                 = (known after apply)

      + db_home {
          + create_async                = false
          + database_software_image_id  = (known after apply)
          + db_home_location            = (known after apply)
          + db_version                  = "19.0.0.0"
          + defined_tags                = (known after apply)
          + display_name                = "SOA"
          + freeform_tags               = (known after apply)
          + id                          = (known after apply)
          + last_patch_history_entry_id = (known after apply)
          + lifecycle_details           = (known after apply)
          + state                       = (known after apply)
          + time_created                = (known after apply)

          + database {
              + admin_password                        = (sensitive value)
              + backup_id                             = (known after apply)
              + backup_tde_password                   = (sensitive value)
              + character_set                         = (known after apply)
              + connection_strings                    = (known after apply)
              + database_id                           = (known after apply)
              + database_software_image_id            = (known after apply)
              + db_domain                             = (known after apply)
              + db_name                               = "SOA"
              + db_unique_name                        = (known after apply)
              + db_workload                           = "OLTP"
              + defined_tags                          = (known after apply)
              + freeform_tags                         = (known after apply)
              + id                                    = (known after apply)
              + kms_key_id                            = (known after apply)
              + kms_key_version_id                    = (known after apply)
              + lifecycle_details                     = (known after apply)
              + ncharacter_set                        = (known after apply)
              + pdb_name                              = "pdb"
              + state                                 = (known after apply)
              + tde_wallet_password                   = (sensitive value)
              + time_created                          = (known after apply)
              + time_stamp_for_point_in_time_recovery = (known after apply)
              + vault_id                              = (known after apply)

              + db_backup_config {
                  + auto_backup_enabled     = (known after apply)
                  + auto_backup_window      = (known after apply)
                  + recovery_window_in_days = (known after apply)

                  + backup_destination_details {
                      + id   = (known after apply)
                      + type = (known after apply)
                    }
                }
            }
        }

      + db_system_options {
          + storage_management = "LVM"
        }
    }

  # module.fss.data.oci_core_private_ip.private_ip will be read during apply
  # (config refers to values not yet known)
 <= data "oci_core_private_ip" "private_ip" {
      + availability_domain = (known after apply)
      + compartment_id      = (known after apply)
      + defined_tags        = (known after apply)
      + display_name        = (known after apply)
      + freeform_tags       = (known after apply)
      + hostname_label      = (known after apply)
      + id                  = (known after apply)
      + ip_address          = (known after apply)
      + is_primary          = (known after apply)
      + is_reserved         = (known after apply)
      + private_ip_id       = (known after apply)
      + subnet_id           = (known after apply)
      + time_created        = (known after apply)
      + vlan_id             = (known after apply)
      + vnic_id             = (known after apply)
    }

  # module.fss.oci_file_storage_export.export will be created
  + resource "oci_file_storage_export" "export" {
      + export_set_id  = (known after apply)
      + file_system_id = (known after apply)
      + id             = (known after apply)
      + path           = "/soa_domains"
      + state          = (known after apply)
      + time_created   = (known after apply)

      + export_options {
          + access                         = "READ_WRITE"
          + anonymous_gid                  = (known after apply)
          + anonymous_uid                  = (known after apply)
          + identity_squash                = "NONE"
          + require_privileged_source_port = false
          + source                         = "10.0.10.0/24"
        }
    }

  # module.fss.oci_file_storage_export_set.export_set[0] will be created
  + resource "oci_file_storage_export_set" "export_set" {
      + availability_domain = (known after apply)
      + compartment_id      = (known after apply)
      + display_name        = "Oracle SOA Export Set for SOA Domains"
      + id                  = (known after apply)
      + max_fs_stat_bytes   = (known after apply)
      + max_fs_stat_files   = (known after apply)
      + mount_target_id     = (known after apply)
      + state               = (known after apply)
      + time_created        = (known after apply)
      + vcn_id              = (known after apply)
    }

  # module.fss.oci_file_storage_file_system.fss[0] will be created
  + resource "oci_file_storage_file_system" "fss" {
      + availability_domain = "Rwod:EU-FRANKFURT-1-AD-2"
      + compartment_id      = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags        = (known after apply)
      + display_name        = "Oracle SOA File System"
      + freeform_tags       = (known after apply)
      + id                  = (known after apply)
      + is_clone_parent     = (known after apply)
      + is_hydrated         = (known after apply)
      + lifecycle_details   = (known after apply)
      + metered_bytes       = (known after apply)
      + source_details      = (known after apply)
      + source_snapshot_id  = (known after apply)
      + state               = (known after apply)
      + time_created        = (known after apply)
    }

  # module.fss.oci_file_storage_mount_target.mount_target[0] will be created
  + resource "oci_file_storage_mount_target" "mount_target" {
      + availability_domain = "Rwod:EU-FRANKFURT-1-AD-2"
      + compartment_id      = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags        = (known after apply)
      + display_name        = "Oracle SOA Mount Target"
      + export_set_id       = (known after apply)
      + freeform_tags       = (known after apply)
      + hostname_label      = (known after apply)
      + id                  = (known after apply)
      + ip_address          = (known after apply)
      + lifecycle_details   = (known after apply)
      + nsg_ids             = (known after apply)
      + private_ip_ids      = (known after apply)
      + state               = (known after apply)
      + subnet_id           = (known after apply)
      + time_created        = (known after apply)
    }

  # module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options will be read during apply
  # (config refers to values not yet known)
 <= data "oci_containerengine_node_pool_option" "node_pool_options" {
      + id                  = (known after apply)
      + images              = (known after apply)
      + kubernetes_versions = (known after apply)
      + node_pool_option_id = (known after apply)
      + shapes              = (known after apply)
      + sources             = (known after apply)
    }

  # module.node_pools.oci_containerengine_node_pool.node_pool[0] will be created
  + resource "oci_containerengine_node_pool" "node_pool" {
      + cluster_id          = (known after apply)
      + compartment_id      = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags        = (known after apply)
      + freeform_tags       = (known after apply)
      + id                  = (known after apply)
      + kubernetes_version  = "v1.23.4"
      + lifecycle_details   = (known after apply)
      + name                = "pool1"
      + node_image_id       = (known after apply)
      + node_image_name     = (known after apply)
      + node_metadata       = (known after apply)
      + node_shape          = "VM.Standard2.4"
      + node_source         = (known after apply)
      + nodes               = (known after apply)
      + quantity_per_subnet = (known after apply)
      + ssh_public_key      = "ssh-rsa MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxMgaz+TovYkhlNsEic/PQEcnr/xGPTrVy4ThVEFIyPyZibNw/4vPJ4x77gu6Pjb3UKYglqCJ7dibGRPOf/AFFfJu/i1LKO1Q4boihTsrzMNveLtG/gLiMHwvF/pBeaBh30Lrh9WL74UN8W4/9Vfbepm3GLb5334gmD0cp2hn44YQYK9Us0eEv2XcfE/RWlAc+nxTL8tbFCdXaANc2qxR4V3g8+dSfFqSNQXeUOdEqJxVsGbTCwW7AjtRztx9ZhV8FcNOl6lU92Htj9qwLPNyE1tdN1MXV6h/OnxZKmgglDZCKEkJzgJqfmneNxHUpo8QTMjyDk7fSgE7bbcTpQh7wIDAQAB"
      + state               = (known after apply)
      + subnet_ids          = (known after apply)

      + initial_node_labels {
          + key   = "pool_name"
          + value = "pool1"
        }

      + node_config_details {
          + defined_tags                        = (known after apply)
          + freeform_tags                       = (known after apply)
          + is_pv_encryption_in_transit_enabled = (known after apply)
          + kms_key_id                          = (known after apply)
          + nsg_ids                             = (known after apply)
          + size                                = 3

          + node_pool_pod_network_option_details {
              + cni_type          = (known after apply)
              + max_pods_per_node = (known after apply)
              + pod_nsg_ids       = (known after apply)
              + pod_subnet_ids    = (known after apply)
            }

          + placement_configs {
              + availability_domain     = "Rwod:EU-FRANKFURT-1-AD-1"
              + capacity_reservation_id = (known after apply)
              + fault_domains           = (known after apply)
              + subnet_id               = (known after apply)
            }
          + placement_configs {
              + availability_domain     = "Rwod:EU-FRANKFURT-1-AD-2"
              + capacity_reservation_id = (known after apply)
              + fault_domains           = (known after apply)
              + subnet_id               = (known after apply)
            }
          + placement_configs {
              + availability_domain     = "Rwod:EU-FRANKFURT-1-AD-3"
              + capacity_reservation_id = (known after apply)
              + fault_domains           = (known after apply)
              + subnet_id               = (known after apply)
            }
        }

      + node_eviction_node_pool_settings {
          + eviction_grace_duration              = (known after apply)
          + is_force_delete_after_grace_duration = (known after apply)
        }

      + node_shape_config {
          + memory_in_gbs = (known after apply)
          + ocpus         = (known after apply)
        }

      + node_source_details {
          + boot_volume_size_in_gbs = (known after apply)
          + image_id                = (known after apply)
          + source_type             = (known after apply)
        }
    }

  # module.vcn.oci_core_internet_gateway.igw will be created
  + resource "oci_core_internet_gateway" "igw" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "internet-gateway"
      + enabled        = true
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)
    }

  # module.vcn.oci_core_nat_gateway.natgw will be created
  + resource "oci_core_nat_gateway" "natgw" {
      + block_traffic  = (known after apply)
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "nat-gateway"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + nat_ip         = (known after apply)
      + public_ip_id   = (known after apply)
      + route_table_id = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)
    }

  # module.vcn.oci_core_route_table.private_rt will be created
  + resource "oci_core_route_table" "private_rt" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "private-subnet-rt-table"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
          + route_type        = (known after apply)
        }
    }

  # module.vcn.oci_core_route_table.public_rt will be created
  + resource "oci_core_route_table" "public_rt" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "public-subnet-rt-table"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)

      + route_rules {
          + cidr_block        = (known after apply)
          + description       = (known after apply)
          + destination       = "0.0.0.0/0"
          + destination_type  = "CIDR_BLOCK"
          + network_entity_id = (known after apply)
          + route_type        = (known after apply)
        }
    }

  # module.vcn.oci_core_security_list.database_sl[0] will be created
  + resource "oci_core_security_list" "database_sl" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "database-security-list"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)

      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "10.0.10.0/24"
          + source_type = (known after apply)
          + stateless   = false

          + tcp_options {
              + max = 1521
              + min = 1521
            }
        }
    }

  # module.vcn.oci_core_security_list.lb_sl will be created
  + resource "oci_core_security_list" "lb_sl" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "lb-security-list"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)

      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "0.0.0.0/0"
          + destination_type = (known after apply)
          + protocol         = "6"
          + stateless        = true
        }

      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "0.0.0.0/0"
          + source_type = (known after apply)
          + stateless   = true
        }
    }

  # module.vcn.oci_core_security_list.node_sl will be created
  + resource "oci_core_security_list" "node_sl" {
      + compartment_id = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags   = (known after apply)
      + display_name   = "nodes-security-list"
      + freeform_tags  = (known after apply)
      + id             = (known after apply)
      + state          = (known after apply)
      + time_created   = (known after apply)
      + vcn_id         = (known after apply)

      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "0.0.0.0/0"
          + destination_type = (known after apply)
          + protocol         = "6"
          + stateless        = false
        }
      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "10.0.0.0/16"
          + destination_type = (known after apply)
          + protocol         = "17"
          + stateless        = false

          + udp_options {
              + max = 111
              + min = 111
            }
        }
      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "10.0.0.0/16"
          + destination_type = (known after apply)
          + protocol         = "6"
          + stateless        = false

          + tcp_options {
              + max = 111
              + min = 111
            }
        }
      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "10.0.0.0/16"
          + destination_type = (known after apply)
          + protocol         = "6"
          + stateless        = false

          + tcp_options {
              + max = 2050
              + min = 2048
            }
        }
      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "10.0.10.0/24"
          + destination_type = (known after apply)
          + protocol         = "all"
          + stateless        = true
        }
      + egress_security_rules {
          + description      = (known after apply)
          + destination      = "10.0.20.0/24"
          + destination_type = (known after apply)
          + protocol         = "all"
          + stateless        = true
        }

      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "1"
          + source      = "0.0.0.0/0"
          + source_type = (known after apply)
          + stateless   = false

          + icmp_options {
              + code = 4
              + type = 3
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "17"
          + source      = "0.0.0.0/0"
          + source_type = (known after apply)
          + stateless   = false

          + udp_options {
              + max = 53
              + min = 53
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "17"
          + source      = "10.0.0.0/16"
          + source_type = (known after apply)
          + stateless   = false

          + udp_options {
              + max = 111
              + min = 111
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "17"
          + source      = "10.0.0.0/16"
          + source_type = (known after apply)
          + stateless   = false

          + udp_options {
              + max = 2048
              + min = 2048
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "0.0.0.0/0"
          + source_type = (known after apply)
          + stateless   = false

          + tcp_options {
              + max = 22
              + min = 22
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "0.0.0.0/0"
          + source_type = (known after apply)
          + stateless   = false

          + tcp_options {
              + max = 32767
              + min = 30000
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "10.0.0.0/16"
          + source_type = (known after apply)
          + stateless   = false

          + tcp_options {
              + max = 111
              + min = 111
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "6"
          + source      = "10.0.0.0/16"
          + source_type = (known after apply)
          + stateless   = false

          + tcp_options {
              + max = 2050
              + min = 2048
            }
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "all"
          + source      = "10.0.10.0/24"
          + source_type = (known after apply)
          + stateless   = true
        }
      + ingress_security_rules {
          + description = (known after apply)
          + protocol    = "all"
          + source      = "10.0.20.0/24"
          + source_type = (known after apply)
          + stateless   = true
        }
    }

  # module.vcn.oci_core_subnet.cluster_lb_subnet will be created
  + resource "oci_core_subnet" "cluster_lb_subnet" {
      + availability_domain        = (known after apply)
      + cidr_block                 = "10.0.20.0/24"
      + compartment_id             = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags               = (known after apply)
      + dhcp_options_id            = (known after apply)
      + display_name               = "lb-public-subnet"
      + dns_label                  = "lb"
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + ipv6cidr_block             = (known after apply)
      + ipv6cidr_blocks            = (known after apply)
      + ipv6virtual_router_ip      = (known after apply)
      + prohibit_internet_ingress  = (known after apply)
      + prohibit_public_ip_on_vnic = (known after apply)
      + route_table_id             = (known after apply)
      + security_list_ids          = (known after apply)
      + state                      = (known after apply)
      + subnet_domain_name         = (known after apply)
      + time_created               = (known after apply)
      + vcn_id                     = (known after apply)
      + virtual_router_ip          = (known after apply)
      + virtual_router_mac         = (known after apply)
    }

  # module.vcn.oci_core_subnet.cluster_nodes_subnet will be created
  + resource "oci_core_subnet" "cluster_nodes_subnet" {
      + availability_domain        = (known after apply)
      + cidr_block                 = "10.0.10.0/24"
      + compartment_id             = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags               = (known after apply)
      + dhcp_options_id            = (known after apply)
      + display_name               = "nodes-private-subnet"
      + dns_label                  = "nodes"
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + ipv6cidr_block             = (known after apply)
      + ipv6cidr_blocks            = (known after apply)
      + ipv6virtual_router_ip      = (known after apply)
      + prohibit_internet_ingress  = (known after apply)
      + prohibit_public_ip_on_vnic = true
      + route_table_id             = (known after apply)
      + security_list_ids          = (known after apply)
      + state                      = (known after apply)
      + subnet_domain_name         = (known after apply)
      + time_created               = (known after apply)
      + vcn_id                     = (known after apply)
      + virtual_router_ip          = (known after apply)
      + virtual_router_mac         = (known after apply)
    }

  # module.vcn.oci_core_subnet.database_subnet[0] will be created
  + resource "oci_core_subnet" "database_subnet" {
      + availability_domain        = (known after apply)
      + cidr_block                 = "10.0.30.0/24"
      + compartment_id             = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + defined_tags               = (known after apply)
      + dhcp_options_id            = (known after apply)
      + display_name               = "db-private-subnet"
      + dns_label                  = "db"
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + ipv6cidr_block             = (known after apply)
      + ipv6cidr_blocks            = (known after apply)
      + ipv6virtual_router_ip      = (known after apply)
      + prohibit_internet_ingress  = (known after apply)
      + prohibit_public_ip_on_vnic = true
      + route_table_id             = (known after apply)
      + security_list_ids          = (known after apply)
      + state                      = (known after apply)
      + subnet_domain_name         = (known after apply)
      + time_created               = (known after apply)
      + vcn_id                     = (known after apply)
      + virtual_router_ip          = (known after apply)
      + virtual_router_mac         = (known after apply)
    }

  # module.vcn.oci_core_virtual_network.vcn will be created
  + resource "oci_core_virtual_network" "vcn" {
      + byoipv6cidr_blocks               = (known after apply)
      + cidr_block                       = "10.0.0.0/16"
      + cidr_blocks                      = (known after apply)
      + compartment_id                   = "ocid1.compartment.oc1..aaaaaaaabppas6upet73aqiuefdg44ja5koxihu5ikvthcvca7sr2jckoa6a"
      + default_dhcp_options_id          = (known after apply)
      + default_route_table_id           = (known after apply)
      + default_security_list_id         = (known after apply)
      + defined_tags                     = (known after apply)
      + display_name                     = "oke-vcn"
      + dns_label                        = "oke"
      + freeform_tags                    = (known after apply)
      + id                               = (known after apply)
      + ipv6cidr_blocks                  = (known after apply)
      + ipv6private_cidr_blocks          = (known after apply)
      + is_ipv6enabled                   = (known after apply)
      + is_oracle_gua_allocation_enabled = (known after apply)
      + state                            = (known after apply)
      + time_created                     = (known after apply)
      + vcn_domain_name                  = (known after apply)

      + byoipv6cidr_details {
          + byoipv6range_id = (known after apply)
          + ipv6cidr_block  = (known after apply)
        }
    }

Plan: 30 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + jdbc_connection_url = (known after apply)
  + kube_config         = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.fss.oci_file_storage_file_system.fss[0]: Creating...
module.vcn.oci_core_virtual_network.vcn: Creating...
module.vcn.oci_core_virtual_network.vcn: Creation complete after 1s [id=ocid1.vcn.oc1.eu-frankfurt-1.amaaaaaajekywzaamccbm5ivo5lrsaz6d3ijqyuad4xpc2oecks27wabmvha]
module.vcn.oci_core_security_list.database_sl[0]: Creating...
module.vcn.oci_core_nat_gateway.natgw: Creating...
module.vcn.oci_core_internet_gateway.igw: Creating...
module.vcn.oci_core_security_list.lb_sl: Creating...
module.vcn.oci_core_security_list.node_sl: Creating...
module.vcn.oci_core_security_list.database_sl[0]: Creation complete after 1s [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaa7oqcsu5px5t2sgjvwgc3yses6zg7y5o5oeani7wvp6xg4kqcxgla]
module.vcn.oci_core_security_list.lb_sl: Creation complete after 1s [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaaohvgfmaqci2zxkttq77h6ayx4hut2r2rze6kjrhph5tmamrt4cza]
module.vcn.oci_core_security_list.node_sl: Creation complete after 1s [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaa3bn6cn7g6vzswsnejuctyazcvyoxfcmmnflm45u4qesrbjmj7faq]
module.fss.oci_file_storage_file_system.fss[0]: Creation complete after 2s [id=ocid1.filesystem.oc1.eu_frankfurt_1.aaaaaaaaaaaqhpkpmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.vcn.oci_core_internet_gateway.igw: Creation complete after 1s [id=ocid1.internetgateway.oc1.eu-frankfurt-1.aaaaaaaauphyxx5h4ffbvnz47sj54s6lh4hxcnht2d5b6fzgmyswhx3imiqa]
module.vcn.oci_core_route_table.public_rt: Creating...
module.vcn.oci_core_route_table.public_rt: Creation complete after 1s [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaamjysxzrmosxqatfmoyo5zi27z2ymxtdobfgcegji5ie4macdud6q]
module.vcn.oci_core_subnet.cluster_lb_subnet: Creating...
module.vcn.oci_core_nat_gateway.natgw: Creation complete after 2s [id=ocid1.natgateway.oc1.eu-frankfurt-1.aaaaaaaamufa6a3hygpkisnhumq6zr7m54uetrlrvtke5oyupl2znnjmdrta]
module.vcn.oci_core_route_table.private_rt: Creating...
module.vcn.oci_core_route_table.private_rt: Creation complete after 1s [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaav2bw3gbfsgob6gmm5yoznyoss545ocb6yfepzqy6nn5pskr3a74a]
module.vcn.oci_core_subnet.database_subnet[0]: Creating...
module.vcn.oci_core_subnet.cluster_nodes_subnet: Creating...
module.vcn.oci_core_subnet.cluster_lb_subnet: Provisioning with 'local-exec'...
module.vcn.oci_core_subnet.cluster_lb_subnet (local-exec): Executing: ["/bin/sh" "-c" "sleep 5"]
module.vcn.oci_core_subnet.database_subnet[0]: Provisioning with 'local-exec'...
module.vcn.oci_core_subnet.database_subnet[0] (local-exec): Executing: ["/bin/sh" "-c" "sleep 5"]
module.vcn.oci_core_subnet.cluster_nodes_subnet: Provisioning with 'local-exec'...
module.vcn.oci_core_subnet.cluster_nodes_subnet (local-exec): Executing: ["/bin/sh" "-c" "sleep 5"]
module.vcn.oci_core_subnet.cluster_lb_subnet: Creation complete after 9s [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaahlrj67e2ur4ao2pqsinymyeb2rrqhz4jlor4hy72ejutq3adgbia]
module.cluster.oci_containerengine_cluster.cluster[0]: Creating...
module.vcn.oci_core_subnet.database_subnet[0]: Creation complete after 9s [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa2hjixdklxvtkpkjitaiphgp6lym2jviruz3x3agrzygcqoq63npq]
module.database.oci_database_db_system.db_system[0]: Creating...
module.vcn.oci_core_subnet.cluster_nodes_subnet: Creation complete after 10s [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaailkugnxjftmyr633qnvp47zkb5cjnbfutohlw2fkopogh6devq4a]
module.fss.oci_file_storage_mount_target.mount_target[0]: Creating...
module.cluster.oci_containerengine_cluster.cluster[0]: Still creating... [10s elapsed]
module.database.oci_database_db_system.db_system[0]: Still creating... [10s elapsed]
module.fss.oci_file_storage_mount_target.mount_target[0]: Still creating... [10s elapsed]
module.cluster.oci_containerengine_cluster.cluster[0]: Still creating... [20s elapsed]
module.database.oci_database_db_system.db_system[0]: Still creating... [20s elapsed]
module.fss.oci_file_storage_mount_target.mount_target[0]: Still creating... [20s elapsed]
module.fss.oci_file_storage_mount_target.mount_target[0]: Creation complete after 20s [id=ocid1.mounttarget.oc1.eu_frankfurt_1.aaaaaby27vgdmfhymzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.fss.data.oci_core_private_ip.private_ip: Reading...
module.fss.oci_file_storage_export_set.export_set[0]: Creating...
module.fss.oci_file_storage_export_set.export_set[0]: Creation complete after 0s [id=ocid1.exportset.oc1.eu_frankfurt_1.aaaaaby27vgdmfhxmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.fss.oci_file_storage_export.export: Creating...
module.fss.data.oci_core_private_ip.private_ip: Read complete after 0s [id=ocid1.privateip.oc1.eu-frankfurt-1.aaaaaaaawcwcrgpfnfaftwyxjp6loxit27gexmvjf3fodpebbefyelzwgxhq]
module.fss.oci_file_storage_export.export: Creation complete after 3s [id=ocid1.export.oc1.eu_frankfurt_1.aaaaaa4np2tw7j7jmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.cluster.oci_containerengine_cluster.cluster[0]: Still creating... [30s elapsed]
module.database.oci_database_db_system.db_system[0]: Still creating... [30s elapsed]
[... identical "Still creating..." progress lines, logged every 10s up to ~17m10s, elided for brevity ...]
module.cluster.oci_containerengine_cluster.cluster[0]: Creation complete after 17m13s [id=ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaaqvjixvtqkcpdbyp54dujmvt2nq3ihmn7yoetnfm7fcpupqp57svq]
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Reading...
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Reading...
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Read complete after 1s [id=ContainerengineClusterKubeConfigDataSource-2069274573]
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Read complete after 2s [id=ContainerengineNodePoolOptionDataSource-1977085663]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Creating...
module.database.oci_database_db_system.db_system[0]: Still creating... [17m20s elapsed]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Still creating... [10s elapsed]
[... "Still creating..." progress lines elided for brevity ...]
module.database.oci_database_db_system.db_system[0]: Creation complete after 18m44s [id=ocid1.dbsystem.oc1.eu-frankfurt-1.antheljsjekywzaaygmn63xoh3vy6y64uzswlusdyxrwndfflirp6ncfw6gq]
local_file.helm_values: Creating...
local_file.helm_values: Creation complete after 0s [id=ba6fcd2b75519727fdb3ec37564aedab0ee333f7]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Still creating... [1m30s elapsed]
[... "Still creating..." progress lines, logged every 10s up to 8m30s, elided for brevity ...]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Creation complete after 8m33s [id=ocid1.nodepool.oc1.eu-frankfurt-1.aaaaaaaaijeb7fk6zar2xgngjjmko5k5zswowmiuyw7xbr7wcn7nqjxyt4aq]
null_resource.cluster_kube_config[0]: Creating...
null_resource.cluster_kube_config[0]: Provisioning with 'local-exec'...
null_resource.cluster_kube_config[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nmkdir -p $HOME/.kube/ \noci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaaqvjixvtqkcpdbyp54dujmvt2nq3ihmn7yoetnfm7fcpupqp57svq --file $HOME/.kube/config --region eu-frankfurt-1 --token-version 2.0.0\n"]
null_resource.cluster_kube_config[0] (local-exec): Existing Kubeconfig file found at /home/acs/.kube/config and new config merged into it
null_resource.cluster_kube_config[0]: Creation complete after 1s [id=2318877136514955045]
null_resource.create_traefik_namespace[0]: Creating...
null_resource.oke_admin_service_account[0]: Creating...
null_resource.create_soa_namespace: Creating...
null_resource.create_wls_operator_namespace[0]: Creating...
null_resource.create_wls_operator_namespace[0]: Provisioning with 'local-exec'...
null_resource.create_wls_operator_namespace[0] (local-exec): Executing: ["/bin/sh" "-c" "if [[ ! $(kubectl get ns opns) ]]; then kubectl create namespace opns; fi"]
null_resource.create_traefik_namespace[0]: Provisioning with 'local-exec'...
null_resource.create_traefik_namespace[0] (local-exec): Executing: ["/bin/sh" "-c" "if [[ ! $(kubectl get ns traefik) ]]; then kubectl create namespace traefik; fi"]
null_resource.oke_admin_service_account[0]: Provisioning with 'local-exec'...
null_resource.oke_admin_service_account[0] (local-exec): Executing: ["/bin/sh" "-c" "if [[ ! $(kubectl get sa oke-admin -n kube-system) ]]; then kubectl create -f ./templates/oke-admin.ServiceAccount.yaml; fi"]
null_resource.create_soa_namespace: Provisioning with 'local-exec'...
null_resource.create_soa_namespace (local-exec): Executing: ["/bin/sh" "-c" "if [[ ! $(kubectl get ns soans) ]]; then kubectl create namespace soans; fi"]
null_resource.create_wls_operator_namespace[0] (local-exec): Error from server (NotFound): namespaces "opns" not found
null_resource.create_traefik_namespace[0] (local-exec): Error from server (NotFound): namespaces "traefik" not found
null_resource.create_traefik_namespace[0] (local-exec): /bin/sh: 1: [[: not found
null_resource.create_traefik_namespace[0]: Creation complete after 4s [id=3763699468155573934]
null_resource.create_wls_operator_namespace[0] (local-exec): /bin/sh: 1: [[: not found
null_resource.create_soa_namespace (local-exec): Error from server (NotFound): namespaces "soans" not found
null_resource.create_soa_namespace (local-exec): /bin/sh: 1: [[: not found
null_resource.create_wls_operator_namespace[0]: Creation complete after 4s [id=7267204078128386200]
null_resource.oke_admin_service_account[0] (local-exec): Error from server (NotFound): serviceaccounts "oke-admin" not found
null_resource.oke_admin_service_account[0] (local-exec): /bin/sh: 1: [[: not found
null_resource.oke_admin_service_account[0]: Creation complete after 5s [id=524956604913063007]
null_resource.create_soa_namespace: Creation complete after 5s [id=2268079780660466207]
null_resource.create_soa_domain_secret[0]: Creating...
null_resource.create_db_secret[0]: Creating...
null_resource.deploy_wls_operator[0]: Creating...
null_resource.docker_registry: Creating...
null_resource.deploy_traefik[0]: Creating...
null_resource.create_rcu_secret[0]: Creating...
null_resource.create_db_secret[0]: Provisioning with 'local-exec'...
null_resource.create_db_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.create_soa_domain_secret[0]: Provisioning with 'local-exec'...
null_resource.create_soa_domain_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.docker_registry: Provisioning with 'local-exec'...
null_resource.docker_registry (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nif [[ ! $(kubectl get secret image-secret -n soans) ]]; then\n    kubectl create secret docker-registry image-secret -n soans --docker-server=container-registry.oracle.com --docker-username='piotr.michalski@oracle.com' --docker-password='XXX' --docker-email='piotr.michalski@oracle.com'\nfi"]
null_resource.deploy_wls_operator[0]: Provisioning with 'local-exec'...
null_resource.deploy_wls_operator[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nif [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then\n  kubectl create serviceaccount -n opns weblogic-operator;\nfi\n\n# wait for at least 1 node to be ready\n\nwhile [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type==\"Ready\")].status}'); do if [[ \"$i\" == \"True\" ]]; then echo $i; fi; done | wc -l | tr -d \" \") -lt 1 ]]; do\n    echo \"waiting for at least 1 node to be ready...\" && sleep 1;\ndone\n\nCHART_VERSION=3.4.0\n\nhelm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update\n\nhelm install weblogic-operator weblogic-operator/weblogic-operator \\\n  --version $CHART_VERSION \\\n  --namespace opns \\\n  --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \\\n  --set serviceAccount=weblogic-operator \\\n  --set \"domainNamespaces={soans}\" \\\n  --wait \\\n  --timeout 600s || exit 1\n\nwhile [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do\n  echo \"Waiting for CRD to be created\";\n  sleep 1;\ndone\n\necho \"WebLogic Operator is installed and running\"\n"]
null_resource.deploy_traefik[0]: Provisioning with 'local-exec'...
null_resource.deploy_traefik[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nCHART_VERSION=10.19.5\n\nhelm repo add traefik https://helm.traefik.io/traefik\n\nhelm install traefik \\\ntraefik/traefik \\\n--version 10.19.5 \\\n--namespace traefik \\\n--set image.tag=2.6.6 \\\n--set ports.traefik.expose=true \\\n--set ports.web.exposedPort=30305 \\\n--set ports.web.nodePort=30305 \\\n--set ports.websecure.exposedPort=30443 \\\n--set ports.websecure.nodePort=30443 \\\n--set \"kubernetes.namespaces={traefik,soans}\" \\\n--wait\n\necho \"Traefik is installed and running\"\n"]
null_resource.create_rcu_secret[0]: Provisioning with 'local-exec'...
null_resource.create_rcu_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.deploy_traefik[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_traefik[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_traefik[0] (local-exec): "traefik" already exists with the same configuration, skipping
null_resource.deploy_traefik[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_traefik[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.create_db_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.create_db_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.create_db_secret[0]: Creation complete after 7s [id=7368898527932521277]
null_resource.create_rcu_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.create_rcu_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.create_rcu_secret[0]: Creation complete after 7s [id=6387971568729978991]
null_resource.docker_registry (local-exec): Error from server (NotFound): namespaces "soans" not found
null_resource.deploy_wls_operator[0] (local-exec): Error from server (NotFound): namespaces "opns" not found
null_resource.create_soa_domain_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.docker_registry (local-exec): /bin/sh: 4: [[: not found
null_resource.docker_registry: Creation complete after 7s [id=4873574436590788812]
null_resource.create_soa_domain_secret[0] (local-exec): (output suppressed due to sensitive value in config)
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 4: [[: not found
null_resource.create_soa_domain_secret[0]: Creation complete after 7s [id=6157815622305706360]
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): /bin/sh: 10: [[: not found
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): "weblogic-operator" has been added to your repositories
null_resource.deploy_wls_operator[0]: Still creating... [10s elapsed]
null_resource.deploy_traefik[0]: Still creating... [10s elapsed]
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_wls_operator[0] (local-exec): WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
null_resource.deploy_traefik[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "traefik" not found
null_resource.deploy_traefik[0] (local-exec): Traefik is installed and running
null_resource.deploy_traefik[0]: Creation complete after 13s [id=4291969415355477325]
null_resource.deploy_wls_operator[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
╷
│ Error: local-exec provisioner error
│
│   with null_resource.deploy_wls_operator[0],
│   on provisioners.tf line 141, in resource "null_resource" "deploy_wls_operator":
│  141:   provisioner "local-exec" {
│
│ Error running command '## Copyright © 2021, Oracle and/or its affiliates.
│ ## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl
│
│ if [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then
│   kubectl create serviceaccount -n opns weblogic-operator;
│ fi
│
│ # wait for at least 1 node to be ready
│
│ while [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}'); do if [[ "$i" == "True" ]]; then echo $i;
│ fi; done | wc -l | tr -d " ") -lt 1 ]]; do
│     echo "waiting for at least 1 node to be ready..." && sleep 1;
│ done
│
│ CHART_VERSION=3.4.0
│
│ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
│
│ helm install weblogic-operator weblogic-operator/weblogic-operator \
│   --version $CHART_VERSION \
│   --namespace opns \
│   --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \
│   --set serviceAccount=weblogic-operator \
│   --set "domainNamespaces={soans}" \
│   --wait \
│   --timeout 600s || exit 1
│
│ while [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do
│   echo "Waiting for CRD to be created";
│   sleep 1;
│ done
│
│ echo "WebLogic Operator is installed and running"
│ ': exit status 1. Output: Error from server (NotFound): namespaces "opns" not found
│ /bin/sh: 4: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ /bin/sh: 10: [[: not found
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ "weblogic-operator" has been added to your repositories
│ WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/acs/.kube/config
│ WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/acs/.kube/config
│ Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
│
╵
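
For reference, the "/bin/sh: 4: [[: not found" and "/bin/sh: 10: [[: not found" messages indicate that /bin/sh here is a POSIX shell (dash, the Ubuntu default), which does not support the bash-only [[ test. Under dash the guard's condition fails with "command not found", the then-branch is skipped, and the namespace is never created, yet the script as a whole still exits 0, which is why the create_*_namespace resources above report "Creation complete" even though nothing was created. A POSIX-safe sketch of the same guard (assuming kubectl is on the PATH and pointed at this cluster) would be:

# portable: relies on kubectl's exit status instead of the bash-only [[ test
if ! kubectl get ns opns >/dev/null 2>&1; then
  kubectl create namespace opns
fi

Alternatively, Terraform's local-exec provisioner accepts an interpreter argument (for example interpreter = ["/bin/bash", "-c"]), which would pin these scripts to bash regardless of what /bin/sh points to.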

I'm attaching both "terraform plan" outputs as well:

Terraform_apply_with_bash_SHELL_issue_3_full_stack_20220804_1106_terraform_plan.txt

Regards, Piotr Michalski Oracle ACS

Michalski-Piotr commented 2 years ago

Hello, the issue is related to the "opns" namespace creation process. I would appreciate any further help in solving this issue. Regards, Piotr

Michalski-Piotr commented 2 years ago

Hello, I removed the apply files, obfuscated the apply output, and changed the related password; unfortunately, the apply output is not safe to be shared. Regards, Piotr

Michalski-Piotr commented 2 years ago

Hello,

Here is the SOA Engineering feedback I received:

The errors are seen with local-exec, which gets executed on the terminal where terraform is being run.

Can you check the below on the terminal and see if you are able to reproduce the same issue? Provide the bash version with the command bash --version.

Then run the below command on the terminal and see if the same error is reported:

/bin/sh -c 'if [[ ! $(kubectl get ns soans) ]]; then kubectl create namespace soans; fi'
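
An alternative idempotent form that avoids the shell test entirely (a sketch; it assumes a kubectl version that supports --dry-run=client) would be:

kubectl create namespace soans --dry-run=client -o yaml | kubectl apply -f -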

Also, on our hosts /bin/sh points to bash:

sh-4.2$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Mar 19  2021 /bin/sh -> bash
sh-4.2$ ls -l /bin/bash
-rwxr-xr-x 1 root root 964536 Nov 22  2019 /bin/bash
sh-4.2$

Regards, Piotr

Michalski-Piotr commented 2 years ago

Hello,

I changed /bin/sh to point to /bin/bash, but the same error is still happening:

$ ls -l /bin/sh
lrwxrwxrwx 1 root root 9 Aug  4 15:12 /bin/sh -> /bin/bash
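
For reference, the Debian/Ubuntu-packaged way to make this change is to reconfigure dash and answer "No" when asked whether dash should be the default system shell:

sudo dpkg-reconfigure dash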

$ sh --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

$ bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ echo $SHELL
/bin/bash

The test with if works fine:

$ /bin/sh -c 'if [[ ! $(kubectl get ns soans) ]]; then kubectl create namespace soans; fi'
$
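
A quick way to double-check which shell /bin/sh now runs (a small sketch; BASH_VERSION is set only by bash, so dash would print the fallback "not bash"):

$ /bin/sh -c 'echo "${BASH_VERSION:-not bash}"'
5.0.17(1)-release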

terraform apply:


acs@acs-vm:~/FMW-K8S$ terraform apply
module.node_pools.data.oci_identity_availability_domains.ads: Reading...
data.oci_identity_tenancy.tenancy: Reading...
module.vcn.data.oci_identity_availability_domains.ads: Reading...
module.node_pools.data.oci_core_images.compatible_images[0]: Reading...
module.database.data.oci_identity_availability_domains.ads: Reading...
module.fss.data.oci_identity_availability_domain.ad: Reading...
module.vcn.oci_core_virtual_network.vcn: Refreshing state... [id=ocid1.vcn.oc1.eu-frankfurt-1.amaaaaaajekywzaamccbm5ivo5lrsaz6d3ijqyuad4xpc2oecks27wabmvha]
module.vcn.oci_core_security_list.database_sl[0]: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaa7oqcsu5px5t2sgjvwgc3yses6zg7y5o5oeani7wvp6xg4kqcxgla]
module.vcn.oci_core_internet_gateway.igw: Refreshing state... [id=ocid1.internetgateway.oc1.eu-frankfurt-1.aaaaaaaauphyxx5h4ffbvnz47sj54s6lh4hxcnht2d5b6fzgmyswhx3imiqa]
module.vcn.oci_core_nat_gateway.natgw: Refreshing state... [id=ocid1.natgateway.oc1.eu-frankfurt-1.aaaaaaaamufa6a3hygpkisnhumq6zr7m54uetrlrvtke5oyupl2znnjmdrta]
module.vcn.oci_core_security_list.node_sl: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaa3bn6cn7g6vzswsnejuctyazcvyoxfcmmnflm45u4qesrbjmj7faq]
module.vcn.oci_core_security_list.lb_sl: Refreshing state... [id=ocid1.securitylist.oc1.eu-frankfurt-1.aaaaaaaaohvgfmaqci2zxkttq77h6ayx4hut2r2rze6kjrhph5tmamrt4cza]
data.oci_identity_tenancy.tenancy: Read complete after 0s [id=ocid1.tenancy.oc1..aaaaaaaa4z6qchwbv6vxjgcnpnn6ofwa264xeto737fv3oyll5w3jm6hsenq]
module.vcn.oci_core_route_table.private_rt: Refreshing state... [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaav2bw3gbfsgob6gmm5yoznyoss545ocb6yfepzqy6nn5pskr3a74a]
module.node_pools.data.oci_identity_availability_domains.ads: Read complete after 0s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.database.data.oci_identity_availability_domains.ads: Read complete after 1s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.fss.data.oci_identity_availability_domain.ad: Read complete after 1s [id=ocid1.availabilitydomain.oc1..aaaaaaaaiifj24st3w4j7cowuo3pmqcuqwjapjv435vtjmgh5j7q3flguwna]
module.fss.oci_file_storage_file_system.fss[0]: Refreshing state... [id=ocid1.filesystem.oc1.eu_frankfurt_1.aaaaaaaaaaaqhpkpmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.vcn.data.oci_identity_availability_domains.ads: Read complete after 1s [id=IdentityAvailabilityDomainsDataSource-3881852365]
module.vcn.oci_core_route_table.public_rt: Refreshing state... [id=ocid1.routetable.oc1.eu-frankfurt-1.aaaaaaaamjysxzrmosxqatfmoyo5zi27z2ymxtdobfgcegji5ie4macdud6q]
module.vcn.oci_core_subnet.database_subnet[0]: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaa2hjixdklxvtkpkjitaiphgp6lym2jviruz3x3agrzygcqoq63npq]
module.vcn.oci_core_subnet.cluster_nodes_subnet: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaailkugnxjftmyr633qnvp47zkb5cjnbfutohlw2fkopogh6devq4a]
module.database.oci_database_db_system.db_system[0]: Refreshing state... [id=ocid1.dbsystem.oc1.eu-frankfurt-1.antheljsjekywzaaygmn63xoh3vy6y64uzswlusdyxrwndfflirp6ncfw6gq]
module.fss.oci_file_storage_mount_target.mount_target[0]: Refreshing state... [id=ocid1.mounttarget.oc1.eu_frankfurt_1.aaaaaby27vgdmfhymzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.vcn.oci_core_subnet.cluster_lb_subnet: Refreshing state... [id=ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaahlrj67e2ur4ao2pqsinymyeb2rrqhz4jlor4hy72ejutq3adgbia]
module.cluster.oci_containerengine_cluster.cluster[0]: Refreshing state... [id=ocid1.cluster.oc1.eu-frankfurt-1.aaaaaaaaqvjixvtqkcpdbyp54dujmvt2nq3ihmn7yoetnfm7fcpupqp57svq]
module.fss.data.oci_core_private_ip.private_ip: Reading...
module.fss.oci_file_storage_export_set.export_set[0]: Refreshing state... [id=ocid1.exportset.oc1.eu_frankfurt_1.aaaaaby27vgdmfhxmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.fss.data.oci_core_private_ip.private_ip: Read complete after 0s [id=ocid1.privateip.oc1.eu-frankfurt-1.aaaaaaaawcwcrgpfnfaftwyxjp6loxit27gexmvjf3fodpebbefyelzwgxhq]
module.fss.oci_file_storage_export.export: Refreshing state... [id=ocid1.export.oc1.eu_frankfurt_1.aaaaaa4np2tw7j7jmzzgcllqojxwiotfouwwm4tbnzvwm5lsoqwtcllbmqwtcaaa]
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Reading...
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Reading...
module.node_pools.data.oci_core_images.compatible_images[0]: Read complete after 2s [id=CoreImagesDataSource-3142788072]
local_file.helm_values: Refreshing state... [id=ba6fcd2b75519727fdb3ec37564aedab0ee333f7]
module.cluster.data.oci_containerengine_cluster_kube_config.cluster_kube_config: Read complete after 0s [id=ContainerengineClusterKubeConfigDataSource-2069274573]
module.node_pools.data.oci_containerengine_node_pool_option.node_pool_options: Read complete after 1s [id=ContainerengineNodePoolOptionDataSource-1977085663]
module.node_pools.oci_containerengine_node_pool.node_pool[0]: Refreshing state... [id=ocid1.nodepool.oc1.eu-frankfurt-1.aaaaaaaaijeb7fk6zar2xgngjjmko5k5zswowmiuyw7xbr7wcn7nqjxyt4aq]
null_resource.cluster_kube_config[0]: Refreshing state... [id=2318877136514955045]
null_resource.oke_admin_service_account[0]: Refreshing state... [id=524956604913063007]
null_resource.create_soa_namespace: Refreshing state... [id=2268079780660466207]
null_resource.create_wls_operator_namespace[0]: Refreshing state... [id=7267204078128386200]
null_resource.create_traefik_namespace[0]: Refreshing state... [id=3763699468155573934]
null_resource.create_soa_domain_secret[0]: Refreshing state... [id=6157815622305706360]
null_resource.docker_registry: Refreshing state... [id=4873574436590788812]
null_resource.deploy_traefik[0]: Refreshing state... [id=4291969415355477325]
null_resource.deploy_wls_operator[0]: Refreshing state... [id=2084980736525641810]
null_resource.create_rcu_secret[0]: Refreshing state... [id=6387971568729978991]
null_resource.create_db_secret[0]: Refreshing state... [id=7368898527932521277]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # null_resource.deploy_wls_operator[0] is tainted, so must be replaced
-/+ resource "null_resource" "deploy_wls_operator" {
      ~ id       = "2084980736525641810" -> (known after apply)
        # (1 unchanged attribute hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.deploy_wls_operator[0]: Destroying... [id=2084980736525641810]
null_resource.deploy_wls_operator[0]: Destruction complete after 0s
null_resource.deploy_wls_operator[0]: Creating...
null_resource.deploy_wls_operator[0]: Provisioning with 'local-exec'...
null_resource.deploy_wls_operator[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl\n\nif [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then\n  kubectl create serviceaccount -n opns weblogic-operator;\nfi\n\n# wait for at least 1 node to be ready\n\nwhile [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type==\"Ready\")].status}'); do if [[ \"$i\" == \"True\" ]]; then echo $i; fi; done | wc -l | tr -d \" \") -lt 1 ]]; do\n    echo \"waiting for at least 1 node to be ready...\" && sleep 1;\ndone\n\nCHART_VERSION=3.4.0\n\nhelm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update\n\nhelm install weblogic-operator weblogic-operator/weblogic-operator \\\n  --version $CHART_VERSION \\\n  --namespace opns \\\n  --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \\\n  --set serviceAccount=weblogic-operator \\\n  --set \"domainNamespaces={soans}\" \\\n  --wait \\\n  --timeout 600s || exit 1\n\nwhile [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do\n  echo \"Waiting for CRD to be created\";\n  sleep 1;\ndone\n\necho \"WebLogic Operator is installed and running\"\n"]
null_resource.deploy_wls_operator[0] (local-exec): Error from server (NotFound): namespaces "opns" not found
null_resource.deploy_wls_operator[0] (local-exec): error: failed to create serviceaccount: namespaces "opns" not found
null_resource.deploy_wls_operator[0] (local-exec): "weblogic-operator" has been added to your repositories
null_resource.deploy_wls_operator[0]: Still creating... [10s elapsed]
null_resource.deploy_wls_operator[0] (local-exec): Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found

╷
│ Error: local-exec provisioner error
│
│   with null_resource.deploy_wls_operator[0],
│   on provisioners.tf line 141, in resource "null_resource" "deploy_wls_operator":
│  141:   provisioner "local-exec" {
│
│ Error running command '## Copyright © 2021, Oracle and/or its affiliates.
│ ## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl
│
│ if [[ ! $(kubectl get serviceaccount weblogic-operator -n opns) ]]; then
│   kubectl create serviceaccount -n opns weblogic-operator;
│ fi
│
│ # wait for at least 1 node to be ready
│
│ while [[ $(for i in $(kubectl get nodes -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}'); do if [[ "$i" == "True" ]]; then echo $i;
│ fi; done | wc -l | tr -d " ") -lt 1 ]]; do
│     echo "waiting for at least 1 node to be ready..." && sleep 1;
│ done
│
│ CHART_VERSION=3.4.0
│
│ helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
│
│ helm install weblogic-operator weblogic-operator/weblogic-operator \
│   --version $CHART_VERSION \
│   --namespace opns \
│   --set image=ghcr.io/oracle/weblogic-kubernetes-operator:$CHART_VERSION \
│   --set serviceAccount=weblogic-operator \
│   --set "domainNamespaces={soans}" \
│   --wait \
│   --timeout 600s || exit 1
│
│ while [[ ! $(kubectl get customresourcedefinition domains.weblogic.oracle -n opns) ]]; do
│   echo "Waiting for CRD to be created";
│   sleep 1;
│ done
│
│ echo "WebLogic Operator is installed and running"
│ ': exit status 1. Output: Error from server (NotFound): namespaces "opns" not found
│ error: failed to create serviceaccount: namespaces "opns" not found
│ "weblogic-operator" has been added to your repositories
│ Error: INSTALLATION FAILED: create: failed to create: namespaces "opns" not found
│

Kindly note that the first error is:

Error running command '## Copyright © 2021, Oracle and/or its affiliates.
│ ## All rights reserved. The Universal Permissive License (UPL), Version 1.0 as shown at http://oss.oracle.com/licenses/upl
│

This corresponds to this part of the executed command:

null_resource.deploy_wls_operator[0] (local-exec): Executing: ["/bin/sh" "-c" "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved

It looks like there is an attempt to execute the script comment "## Copyright © 2021, Oracle and/or its affiliates. \n## All rights reserved" as a Unix command, while it is only a script comment. The namespace creation attempt is executed later, so this could be failing because the above command returns an error exit state.
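
One more observation: the namespace resources (null_resource.create_wls_operator_namespace[0], null_resource.create_soa_namespace, null_resource.create_traefik_namespace[0]) already reported "Creation complete" in the first run, so this re-apply only replaced the tainted deploy_wls_operator and still found no "opns" namespace. A sketch of forcing the namespace resources to re-run now that /bin/sh points at bash (resource addresses taken from the log above):

terraform taint 'null_resource.create_wls_operator_namespace[0]'
terraform taint 'null_resource.create_soa_namespace'
terraform taint 'null_resource.create_traefik_namespace[0]'
terraform apply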

I'm providing the same feedback to the SOA Engineering Team.

Regards, Piotr Michalski Oracle ACS

Michalski-Piotr commented 2 years ago

Hello, the issue has been fixed, thanks to the SOA Kubernetes (K8S) Engineering team, with the following changes: 1) fixed /bin/sh to point to /bin/bash on the Ubuntu box, 2) tainted all failed resources, 3) applied the changes in tfvars.template.

Thank you very much for your help. I'm closing this issue, as it is related to the deployment environment itself, not the code.

Regards, Piotr