crossplane-contrib / provider-upjet-aws

Official AWS Provider for Crossplane by Upbound.
https://marketplace.upbound.io/providers/upbound/provider-aws
Apache License 2.0
144 stars · 121 forks

Updating taskDefinition(ECS) falls into ReconcileError #823

Closed mhoshi-vm closed 1 month ago

mhoshi-vm commented 1 year ago

What happened?

I am getting the following error when I update the spec.forProvider.containerDefinitions section after the initial deploy.

  - lastTransitionTime: "2023-08-11T12:45:46Z"
    message: 'observe failed: cannot run plan: plan failed: Instance cannot be destroyed:
      Resource aws_ecs_task_definition.test2-ecs-ns1 has lifecycle.prevent_destroy
      set, but the plan calls for this resource to be destroyed. To avoid this error
      and continue with the plan, either disable lifecycle.prevent_destroy or reduce
      the scope of the plan using the -target flag.'
    reason: ReconcileError
    status: "False"
    type: Synced
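For context, the error message above implies the provider renders a Terraform resource with lifecycle.prevent_destroy set, while a change to containerDefinitions forces a replacement of the task definition, which that lifecycle block then rejects. A rough sketch of the rendered HCL, inferred from the error message only (not taken from the provider source; attribute values are illustrative):

```hcl
resource "aws_ecs_task_definition" "test2-ecs-ns1" {
  family                = "test2-ecs-ns1"
  container_definitions = file("container-definitions.json")

  # Set by the managed workspace; any attribute change that forces
  # replacement (such as editing container_definitions) makes the
  # plan fail with the ReconcileError shown above.
  lifecycle {
    prevent_destroy = true
  }
}
```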

Here is my entire manifest

apiVersion: ecs.aws.upbound.io/v1beta1
kind: TaskDefinition
metadata:
  labels:
    app.kubernetes.io/part-of: ecs-test
    apps.tanzu.vmware.com/has-tests: "true"
    apps.tanzu.vmware.com/workload-type: ecs
    app.kubernetes.io/component: run
    carto.run/workload-name: test2
  name: test2-ecs-ns1
  annotations:
    boot.spring.io/version: 3.1.2
    conventions.carto.run/applied-conventions: |-
      appliveview-sample/app-live-view-appflavour-check
      spring-boot-convention/auto-configure-actuators-check
      spring-boot-convention/spring-boot
      spring-boot-convention/spring-boot-graceful-shutdown
      spring-boot-convention/spring-boot-web
      spring-boot-convention/spring-boot-actuator
      spring-boot-convention/spring-boot-actuator-probes
      spring-boot-convention/app-live-view-appflavour-check
      spring-boot-convention/app-live-view-connector-boot
      spring-boot-convention/app-live-view-appflavours-boot
    developer.conventions/target-containers: workload
spec:
  deletionPolicy: Delete
  providerConfigRef:
    name: aws-provider
  forProvider:
    containerDefinitions: |-
      [
         {
            "cpu": 0,
            "environment": [
               {
                  "name": "JAVA_TOOL_OPTIONS",
                  "value": "-Dmanagement.endpoint.health.probes.add-additional-paths=\"true\" -Dmanagement.health.probes.enabled=\"true\" -Dserver.port=\"8080\" -Dserver.shutdown.grace-period=\"24s\""
               }
            ],
            "essential": true,
            "image": "ghcr.io/mhoshi-vm/tap/workloads/test2-ecs-ns1@sha256:d9c14cb82c2b5c838a7d8e3123fe4cc9fd3727c7f0c3505dcae0510361352a43",
            "mountPoints": [],
            "name": "workload",
            "portMappings": [
               {
                  "containerPort": 8080,
                  "hostPort": 8080,
                  "protocol": "tcp"
               }
            ],
            "user": "1000",
            "volumesFrom": []
         }
      ]
    family: test2-ecs-ns1
    region: us-west-2
    cpu: "512"
    memory: "1024"
    networkMode: awsvpc
    skipDestory: true
    requiresCompatibilities:
    - FARGATE
  initProvider: {}
  managementPolicies:
  - '*'

How can we reproduce it?

Deploy the above and modify anything in containerDefinitions, such as the environment section.
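To illustrate the kind of edit that reproduces it: containerDefinitions is a JSON string inside the YAML manifest, so even a one-character change to an environment value alters the rendered attribute and triggers the replacement plan. A minimal sketch with illustrative values (not the full manifest above):

```python
import json

# Trimmed-down copy of the containerDefinitions JSON string from the
# manifest in this issue (values are illustrative).
container_defs = json.loads("""
[
  {
    "name": "workload",
    "essential": true,
    "environment": [
      {"name": "JAVA_TOOL_OPTIONS", "value": "-Dserver.port=\\"8080\\""}
    ]
  }
]
""")

# Any edit here -- even a single environment value -- changes the
# rendered container_definitions attribute, which Terraform treats as
# forcing a destroy/create of the task definition.
container_defs[0]["environment"][0]["value"] = '-Dserver.port="9090"'

updated = json.dumps(container_defs, indent=2)
print(updated)
```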

What environment did it happen in?

mhoshi-vm commented 1 year ago

Btw, I realized I misspelled skipDestroy as skipDestory in the example above. However, even after fixing that, I still see the issue.
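For reference, the corrected field would read as below. My understanding (not verified against the provider schema) is that the camelCase skipDestroy key maps to the skip_destroy argument of Terraform's aws_ecs_task_definition, and that a misspelled key is typically pruned by the API server's structural schema rather than rejected:

```yaml
spec:
  forProvider:
    # Corrected spelling of the key that was written as "skipDestory"
    # in the manifest above.
    skipDestroy: true
```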

turkenf commented 1 year ago

Hi @mhoshi-vm,

Thank you for raising this issue; it can be reproduced with provider-aws v0.39.0:

  - lastTransitionTime: "2023-08-28T18:10:08Z"
    message: 'observe failed: cannot run plan: plan failed: Instance cannot be destroyed:
      Resource aws_ecs_task_definition.sample-taskft has lifecycle.prevent_destroy
      set, but the plan calls for this resource to be destroyed. To avoid this error
      and continue with the plan, either disable lifecycle.prevent_destroy or reduce
      the scope of the plan using the -target flag.'
    reason: ReconcileError
github-actions[bot] commented 6 months ago

This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.

blim747 commented 4 months ago

I'll mention that my team and I were getting a similar error with an RDS resource; upgrading from 0.38.0 to 0.47.4 resolved it for us.
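If anyone wants to try the same fix, bumping the provider package looks roughly like this (the package path and metadata name are illustrative; adjust them to your cluster's setup):

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  # Version reported above as resolving the RDS variant of this error.
  package: xpkg.upbound.io/upbound/provider-aws:v0.47.4
```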

Linking similar issue https://github.com/crossplane-contrib/provider-upjet-aws/issues/620

github-actions[bot] commented 1 month ago

This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.

github-actions[bot] commented 1 month ago

This issue is being closed since there has been no activity for 14 days since marking it as stale. If you still need help, feel free to comment or reopen the issue!