Hello,
I am seeing completely identical behaviour, but with a NodePool. The cluster itself is Ready/Synced, and both resources are brand new:
apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
metadata:
  annotations:
    meta.upbound.io/example-id: container/v1beta1/cluster
  labels:
    testing.upbound.io/example-name: cp-intconn-cluster
  name: cp-intconn-cluster
spec:
  forProvider:
    addonsConfig:
      - configConnectorConfig:
          - enabled: true
        gcePersistentDiskCsiDriverConfig:
          - enabled: true
        gcpFilestoreCsiDriverConfig:
          - enabled: true
        horizontalPodAutoscaling:
          - disabled: false
        httpLoadBalancing:
          - disabled: false
        networkPolicyConfig:
          - disabled: false
    binaryAuthorization:
      - evaluationMode: DISABLED
    clusterAutoscaling:
      - autoProvisioningDefaults:
          - serviceAccount: "<name>@<project>.iam.gserviceaccount.com"
        enabled: false
    enableIntranodeVisibility: false
    enableShieldedNodes: true
    initialNodeCount: 1
    ipAllocationPolicy:
      - clusterIpv4CidrBlock:
        servicesIpv4CidrBlock:
    location: europe-west3
    loggingConfig:
      - enableComponents:
          - SYSTEM_COMPONENTS
          - WORKLOADS
    masterAuth:
      - clientCertificateConfig:
          - issueClientCertificate: false
    masterAuthorizedNetworksConfig:
      - cidrBlocks:
          - cidrBlock: "<network>/<prefix>"
    minMasterVersion: "1.27.4"
    monitoringConfig:
      - enableComponents:
          - SYSTEM_COMPONENTS
        managedPrometheus:
          - enabled: true
    network: "projects/<project>/global/networks/<network>"
    subnetwork: "projects/<project>/regions/europe-west3/subnetworks/<subnet>"
    networkPolicy:
      - enabled: true
    networkingMode: VPC_NATIVE
    privateClusterConfig:
      - enablePrivateNodes: true
        enablePrivateEndpoint: false
        masterGlobalAccessConfig:
          - enabled: true
        masterIpv4CidrBlock: 172.16.0.0/28
    releaseChannel:
      - channel: REGULAR
    removeDefaultNodePool: true
    workloadIdentityConfig:
      - workloadPool: "<project>.svc.id.goog"
  writeConnectionSecretToRef:
    name: gke-conn
    namespace: crossplane-test
---
apiVersion: container.gcp.upbound.io/v1beta1
kind: NodePool
metadata:
  annotations:
    meta.upbound.io/example-id: container/v1beta1/nodepool
  labels:
    testing.upbound.io/example-name: nodepool-1
  name: nodepool-1
spec:
  forProvider:
    autoscaling:
      - locationPolicy: BALANCED
        minNodeCount: 0
        maxNodeCount: 1
    cluster: cp-intconn-cluster
    clusterSelector:
      matchLabels:
        testing.upbound.io/example-name: cp-intconn-cluster
    location: europe-west3
    management:
      - autoRepair: true
        autoUpgrade: true
    nodeConfig:
      - diskSizeGb: 120
        diskType: pd-ssd
        imageType: COS_CONTAINERD
        machineType: e2-highcpu-4
        preemptible: true
        serviceAccount: "<name>@<project>.iam.gserviceaccount.com"
        serviceAccountSelector:
          matchLabels:
            testing.upbound.io/example-name: <name>
        workloadMetadataConfig:
          - mode: GKE_METADATA
    nodeLocations:
      - "europe-west3-a"
      - "europe-west3-b"
      - "europe-west3-c"
    upgradeSettings:
      - maxSurge: 1
        maxUnavailable: 0
    version: "1.27.4"
The NodePool is created successfully, and a second or two later it becomes Ready but Unsynced, with the following conditions:
conditions:
  - lastTransitionTime: '2023-09-24T15:06:42Z'
    reason: Available
    status: 'True'
    type: Ready
  - lastTransitionTime: '2023-09-24T15:06:45Z'
    message: >-
      observe failed: cannot run plan: plan failed: Instance cannot be
      destroyed: Resource google_container_node_pool.nodepool-1 has
      lifecycle.prevent_destroy set, but the plan calls for this resource to
      be destroyed. To avoid this error and continue with the plan, either
      disable lifecycle.prevent_destroy or reduce the scope of the plan using
      the -target flag.
    reason: ReconcileError
    status: 'False'
    type: Synced
  - lastTransitionTime: '2023-09-24T15:06:39Z'
    message: >-
      apply failed: NodePool nodepool-1 was created in the error state
      "ERROR":
    reason: ApplyFailure
    status: 'False'
    type: LastAsyncOperation
  - lastTransitionTime: '2023-09-24T15:06:39Z'
    reason: Finished
    status: 'True'
    type: AsyncOperation
clientVersion:
  buildDate: "2023-09-13T09:35:49Z"
  compiler: gc
  gitCommit: 89a4ea3e1e4ddd7f7572286090359983e0387b2f
  gitTreeState: clean
  gitVersion: v1.28.2
  goVersion: go1.20.8
  major: "1"
  minor: "28"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
serverVersion:
  buildDate: "2023-08-15T21:24:51Z"
  compiler: gc
  gitCommit: 855e7c48de7388eb330da0f8d9d2394ee818fb8d
  gitTreeState: clean
  gitVersion: v1.28.0
  goVersion: go1.20.7
  major: "1"
  minor: "28"
  platform: linux/amd64
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
crossplane-control-plane Ready control-plane 34d v1.28.0 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 6.5.5-060505-generic containerd://1.7.1
crossplane-worker Ready <none> 34d v1.28.0 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 6.5.5-060505-generic containerd://1.7.1
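For what it's worth, the Terraform plan that triggers the prevent_destroy error can be inspected by running the provider with debug logging. The snippet below is only a sketch of how I would enable that; the ControllerConfig name and the Provider object name are assumptions about a typical installation, not taken from this issue:

apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: provider-gcp-debug          # hypothetical name
spec:
  args:
    - --debug                       # verbose provider/upjet logs

Referencing this from the Provider object's spec.controllerConfigRef (name: provider-gcp-debug) restarts the provider pod with verbose output, which may show what the computed plan actually wants to change on the node pool.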
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.
This issue is being closed since there has been no activity for 14 days since marking it as stale. If you still need help, feel free to comment or reopen the issue!
What happened?
Hello there,
I have been trying to migrate my existing Crossplane-managed infrastructure from crossplane/provider-gcp:v0.21.0 to this provider. As a first step I am simply creating a new GKE cluster with it. The cluster is created as expected, but the resource never becomes Synced according to Crossplane. In the condition I can see:
I don't know why Terraform wants to destroy the instance here, as I have made no changes to the object on my side (unless the controller itself is patching it).
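To rule out the controller acting behind my back, one option is to pause reconciliation and then compare spec.forProvider with status.atProvider while nothing is being reconciled. This is just a sketch using the standard crossplane.io/paused annotation; <cluster-name> is a placeholder for the actual managed resource name:

apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
metadata:
  name: <cluster-name>              # placeholder for the managed resource name
  annotations:
    crossplane.io/paused: "true"    # stops reconciliation until the annotation is removed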
What environment did it happen in?
Crossplane version: 1.13.2
Provider version: v0.36.0
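For reference, a minimal Provider manifest pinning this version would look like the sketch below; the package path assumes the Upbound registry location for upbound/provider-gcp and may differ from the actual setup:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  package: xpkg.upbound.io/upbound/provider-gcp:v0.36.0   # matches the provider version above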