stevendborrelli opened this issue 4 months ago (status: Open)
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.
/fresh
Is there an existing issue for this?
Affected Resource(s)
eks.aws.upbound.io - Cluster
Resource MRs required to reproduce the bug
Use configuration-aws-eks along with a cluster XR to create an EKS cluster:
configuration.yaml
```yaml
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: configuration-aws-eks
spec:
  package: xpkg.upbound.io/upbound/configuration-aws-eks:v0.9.0
```
xr.yaml
```yaml
apiVersion: aws.platform.upbound.io/v1alpha1
kind: XNetwork
metadata:
  name: configuration-aws-eks
spec:
  parameters:
    id: configuration-aws-eks
    region: us-west-2
---
apiVersion: aws.platform.upbound.io/v1alpha1
kind: XEKS
metadata:
  name: configuration-aws-eks
spec:
  parameters:
    deletionPolicy: Delete
    providerConfigName: default
    id: configuration-aws-eks
    region: us-west-2
    version: "1.27"
    iam:
      # replace with your custom arn like:
      # roleArn: arn:aws:iam::123456789:role/AWSReservedSSO_AdministratorAccess_d703c73ed340fde7
      roleArn: "arn:aws:iam::609897127049:user/steven"
    nodes:
      count: 1
      instanceType: t3.small
  writeConnectionSecretToRef:
    name: configuration-aws-eks-kubeconfig
    namespace: upbound-system
```
Steps to Reproduce
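The steps amount to: apply configuration.yaml and xr.yaml above, wait for the cluster to become Ready, then re-apply the XEKS with only the Kubernetes version bumped. A minimal sketch of the re-applied manifest (assumed diff; every other field stays exactly as in xr.yaml):

```yaml
# xr.yaml after the edit -- only spec.parameters.version changes
apiVersion: aws.platform.upbound.io/v1alpha1
kind: XEKS
metadata:
  name: configuration-aws-eks
spec:
  parameters:
    # ...all other parameters unchanged from xr.yaml above...
    version: "1.28"   # bumped from "1.27"; re-applying this triggers the upgrade
```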
What happened?
Once an updated XR manifest is applied with version 1.28, the upgrade starts immediately. spec.forProvider.version is 1.28 while status.atProvider.version is still 1.27 during the cluster upgrade, but the resource's conditions do not show that an upgrade is in progress. The Kubernetes API server remains available throughout the upgrade, so the cluster is not in an unready state either.

Relevant Error Output Snippet
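There is no error output as such; the complaint is that the status never changes. Reconstructed from the description above, the Cluster MR's status mid-upgrade looks roughly like this (a sketch with illustrative values, not captured output):

```yaml
# Illustrative Cluster MR status during the upgrade (not actual output)
status:
  atProvider:
    version: "1.27"      # observed version still trails spec.forProvider.version: "1.28"
  conditions:
    - type: Ready
      status: "True"     # stays True -- no indication an upgrade is in progress
    - type: Synced
      status: "True"
```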