Closed mhoshi-vm closed 1 month ago
btw I realized I misspelled skipDestroy as skipDestory in the above example. However, even after fixing that I still see the issue.
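For reference, and assuming the upjet field name mirrors Terraform's skip_destroy argument on aws_ecs_task_definition, the corrected spelling would sit under forProvider. This is a hedged fragment for illustration only, not the reporter's actual manifest:

```yaml
# Hypothetical fragment -- only the skipDestroy spelling is the point here.
spec:
  forProvider:
    skipDestroy: true   # correct spelling; a misspelled "skipDestory" key
                        # would not match the CRD schema and so has no effect
```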
Hi @mhoshi-vm,
Thank you for raising this issue. It can be reproduced with provider-aws v0.39.0:
- lastTransitionTime: "2023-08-28T18:10:08Z"
message: 'observe failed: cannot run plan: plan failed: Instance cannot be destroyed:
Resource aws_ecs_task_definition.sample-taskft has lifecycle.prevent_destroy
set, but the plan calls for this resource to be destroyed. To avoid this error
and continue with the plan, either disable lifecycle.prevent_destroy or reduce
the scope of the plan using the -target flag.'
reason: ReconcileError
This provider repo does not have enough maintainers to address every issue. Since there has been no activity in the last 90 days it is now marked as stale. It will be closed in 14 days if no further activity occurs. Leaving a comment starting with /fresh will mark this issue as not stale.
I'll mention that my team and I were getting a similar error with an RDS resource; upgrading from 0.38.0 to 0.47.4 resolved it for us.
Linking a similar issue: https://github.com/crossplane-contrib/provider-upjet-aws/issues/620
This issue is being closed since there has been no activity for 14 days since marking it as stale. If you still need help, feel free to comment or reopen the issue!
What happened?
I am getting the following error when I update the spec.forProvider.containerDefinitions section after the initial deploy. Here is my entire manifest:
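(The reporter's full manifest is not reproduced in this thread. Purely as a hedged sketch of the kind of resource being discussed: a minimal provider-upjet-aws TaskDefinition might look like the following. The apiVersion, region, image, and most field values are assumptions; only the resource name comes from the error message above.)

```yaml
apiVersion: ecs.aws.upbound.io/v1beta1   # assumed group/version for the ECS TaskDefinition
kind: TaskDefinition
metadata:
  name: sample-taskft                    # name taken from the error message above
spec:
  forProvider:
    region: us-east-1                    # hypothetical
    family: sample-taskft
    requiresCompatibilities:
      - FARGATE
    networkMode: awsvpc
    cpu: "256"
    memory: "512"
    # containerDefinitions is a JSON string; editing anything inside it
    # (e.g. the environment list) after the initial deploy triggers the error.
    containerDefinitions: |
      [
        {
          "name": "app",
          "image": "nginx:latest",
          "essential": true,
          "environment": [
            {"name": "LOG_LEVEL", "value": "info"}
          ]
        }
      ]
```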
How can we reproduce it?
Deploy the manifest above and modify anything in containerDefinitions, such as environment.

What environment did it happen in?