srethira opened this issue 4 years ago
This is true.
@rix0rrr Is this a current limitation, or is it designed that way? Will this be fixed in the future?
As long as this behavior is in place (I vote for remediation), what do you think would be a sensible workaround? Track all application stacks (for instance by tag) and delete them with a wrapping application/shell script? I was thinking of selecting all application stacks by tag and deleting them via AWS CLI commands from a shell script. Is there a more elegant way?
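For reference, a rough boto3 sketch of that tag-based idea might look like the following. The "cdk-app" tag key and "my-application" value are hypothetical; this assumes every application stack deployed by the pipeline carries that tag.

import boto3


def delete_stacks_by_tag(tag_key: str, tag_value: str) -> None:
    """Deletes every CloudFormation stack carrying the given tag."""
    cfn = boto3.client("cloudformation")
    for page in cfn.get_paginator("describe_stacks").paginate():
        for stack in page["Stacks"]:
            tags = {t["Key"]: t["Value"] for t in stack.get("Tags", [])}
            if tags.get(tag_key) == tag_value:
                print(f"Deleting {stack['StackName']}")
                cfn.delete_stack(StackName=stack["StackName"])


if __name__ == "__main__":
    delete_stacks_by_tag("cdk-app", "my-application")  # hypothetical tag key/value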
Facing a similar issue: stacks created from the pipeline are not deleted when deleting the PipelineStack. Even if we try to delete the created stack using the CDK, it is not deleted.
Ex: PipelineStack > CDK build generates CFN template (Stack 2) > Deploy CFN (Stack 2)
cdk destroy PipelineStack - this deletes only the pipeline
cdk destroy Stack 2 - this says Stack 2 is not found
Could you please suggest a best practice for cleaning up the resources in the above use case?
@rix0rrr and @SomayaB why is this not considered a bug, and is there any guidance?
Hey @srethira,
This is a limitation of the current design. At the moment the only way of deleting the created stacks is through the CloudFormation console. I tend to agree this is less than optimal, but it is not a bug (that said, I am not sure a fix is feasible).
So for now, I have marked it as a feature-request pending further review by one of the devs working on pipelines.
😸 😷
Apologies for my flippant response before. cdk destroy will only destroy single stacks, or related stacks that depend on the destroyed stack. Since the application stacks don't depend on the pipeline stack, they won't be destroyed.
We could think of a way to try to encode that, but at the moment all stacks deployed through the pipeline are independent and need to be deleted by hand.
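For context, a minimal sketch of the layout being described (all names and the connection ARN below are placeholders) shows why the application stacks are independent: they are defined inside a Stage that the pipeline deploys, so there is no dependency edge back to the pipeline stack.

from aws_cdk import App, Stack, Stage, pipelines
from constructs import Construct


class AppStack(Stack):
    """An application stack; deployed by the pipeline, not by `cdk deploy`."""


class AppStage(Stage):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        AppStack(self, "AppStack")


class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self,
            "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "my-org/my-repo",
                    "main",
                    connection_arn="arn:aws:codestar-connections:eu-west-1:123456789012:connection/example",
                ),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )
        # the stacks in this stage become standalone CloudFormation stacks in the target account
        pipeline.add_stage(AppStage(self, "Prod"))


app = App()
PipelineStack(app, "PipelineStack")  # `cdk destroy PipelineStack` removes only this stack
app.synth()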
My workaround is to add a flag (and a bit of code) to app.py. When I set the flag, the added code returns the necessary info to make additional cdk CLI calls to delete the app stacks. I dump the info in JSON on STDERR for easier capture from a wrapper script.
Basic example:
import json
import sys

from constructs import Construct  # aws_cdk.core.Construct in CDK v1

app.synth()

...

def dump_app_stack_data(user_app: Construct):
    """Dumps information about application stacks to STDERR.

    CDK pipelines currently doesn't support deleting application stacks created by a pipeline
    automatically. This function reports the necessary information in JSON form so an external
    script can then iterate over and delete the application stacks.

    Typical usage example:

        my_app = MyApplication(self, "Prod",
            env=Environment(
                account="123456789012",
                region="eu-west-1"
            )
        )

        if some_flag:
            dump_app_stack_data(my_app)

    Args:
        user_app: user application containing stacks to remove
    """
    app_stack_data = {
        "outdir": user_app.outdir,
        # tear down stacks in reverse order from create to ensure correct dependency order removal
        "stacks": [stack.node.unique_id for stack in reversed(user_app.node.children)],
    }
    print(json.dumps(app_stack_data), file=sys.stderr)


# handle request to dump application stack info
if app.node.try_get_context('@flags/app:dump_app_stack_data'):
    dump_app_stack_data(global_control_plane_app)
In my cleanup pipeline I run this script before cdk destroy:
#!/usr/bin/env bash
# This script demonstrates using the `dump_app_stack_data` custom feature/flag to cleanup application stacks created by
# the pipeline stack. After running this script it's safe to delete the pipeline stack itself.
#
# CDK Pipelines doesn't natively support this cleanup at the time of this writing.
#
# Any additional arguments to this script are passed to `cdk destroy` as-is
# e.g. `./cleanup_app_stacks.sh --force` to disable confirmation prompt
# run cdk and discard STDOUT to /dev/null (capturing just the JSON output containing application stack info from STDERR)
app_stack_data=$(cdk ls --context "@flags/app:dump_app_stack_data"=true "$@" 2>&1 1>/dev/null)
echo "retrieved app stack data: ${app_stack_data}"
# extract the outdir (path to the cloud assembly for the application stacks; needed for cdk to operate on them)
outdir=$(echo "$app_stack_data" | python -c 'import sys, json; print(json.load(sys.stdin)["outdir"])')
echo "parsed outdir: ${outdir}"
# extract the list of stacks to destroy (in a bash-friendly format...space-separated)
stacks=$(echo "$app_stack_data" | python -c 'import sys, json; print(*json.load(sys.stdin)["stacks"], sep = " ")')
echo "parsed stacks: ${stacks}"
# iterate over the stacks calling `cdk destroy` for each (passing the assembly dir so cdk can find stack details)
echo "Destroying application stacks..."
for stack in ${stacks[*]}; do
    echo "- ${stack}"
    cdk destroy "$@" -a "${outdir}" "${stack}"
done
Obtain the app stack data by passing the config flag:
▸ cdk ls --context "@flags/app:dump_app_stack_data"=true
{
"outdir": "cdk.out/assembly-my-pipeline",
"stacks": [
"myappstackone844C2CC7",
"myappstacktwo29580319",
"myappstackthree30EBED8A"
]
}
my-pipeline
Use this info to tell cdk to operate on these stacks:
▸ cdk -a cdk.out/assembly-my-pipeline ls
myappstackone844C2CC7
myappstacktwo29580319
myappstackthree30EBED8A
From here you can destroy them as usual. Hope this helps! 😁
@rix0rrr Here is my thought on this issue: if we could pass an additional parameter to the destroy command, something like "delete-app-stack true", and somehow override the stage with empty stacks, the CDK could automatically delete those app stacks and finally run destroy on the pipeline itself.
@rix0rrr Is this not planned anymore?
Thoughts:
(1) Could let CloudFormation do the thing it does well: manage resources. Instead of having CodePipeline deploy the stacks, let CloudFormation do it, in nested stacks. You'd only have one stack per stage (see the sketch after this list). This might have all sorts of other side effects that may be very undesirable.
(2) A bit like we cache cdk.context.json, we could cache the structure of the pipeline (at least the stages that are deploying things). The build could work out the difference between what had been deployed last time and what is going to be deployed this time, and delete the ones that no longer need to be there.
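Idea (1) above could be sketched roughly like this, assuming the per-stage resources can be modeled as nested stacks (all names are hypothetical, and the caveats about side effects still apply):

from aws_cdk import App, NestedStack, Stack
from constructs import Construct


class ServiceA(NestedStack):
    pass


class ServiceB(NestedStack):
    pass


class ProdStack(Stack):
    """One top-level stack per stage; everything else is nested under it."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        ServiceA(self, "ServiceA")
        ServiceB(self, "ServiceB")


app = App()
ProdStack(app, "ProdStack")  # `cdk destroy ProdStack` removes the nested stacks too
app.synth()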
Thanks @jguice for your example. Here is something I had to implement with the latest stacks in TypeScript:
const out = app.synth();

if (app.node.tryGetContext('@flags/app:dump_app_stack_data')) {
  const appStackData = {
    outdir: app.outdir,
    stacks: out.stacksRecursively.reverse().map((stack) => stack.stackName)
  };
  console.error(JSON.stringify(appStackData));
}
Listing stacks as parameters to destroy is a pretty simple workaround:
#!/bin/bash
STACKS="$(cdk list | tr '\n' ' ')"
cdk destroy ${STACKS}
I ran into this unexpected behavior of the AWS CDK pipeline not deleting the stacks that are built by its pipeline. I find it non-intuitive that we don't have an option to delete all the stacks created alongside the infrastructure.
Thanks @ssaarela for that workaround!
However, I think it is still necessary to have the option to destroy either all stacks or specific stacks (pipeline only, or infrastructure & app only). This would provide a seamless workflow for doing everything from the CLI.
Listing stacks as parameters to destroy is a pretty simple workaround:
#!/bin/bash
STACKS="$(cdk list | tr '\n' ' ')"
cdk destroy ${STACKS}
Thanks for the workaround, but this does not work cross-account. For example, I have account A (which is dev) and a target account B (where I do not have admin rights). The cdk synth for account B is done by an accountA:pipelinestack-role. When I destroy the pipeline in account A, all the stacks in account B still exist, which does not make sense.
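In that cross-account situation, one option (sketched below with boto3) is to assume a role in the target account and delete the application stacks there directly; the "StackCleanupRole" name is hypothetical and would need to exist in account B with permission to delete those stacks.

import boto3


def delete_stack_in_target_account(account_id: str, region: str, stack_name: str) -> None:
    """Assumes a cleanup role in the target account and deletes one stack there."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/StackCleanupRole",  # hypothetical role
        RoleSessionName="cleanup",
    )["Credentials"]
    cfn = boto3.client(
        "cloudformation",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)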
Looks like there are some workarounds but they are not ideal. It would be great to have an automated option to delete stacks that are no longer part of a pipeline.
This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.
Is there any plan to resolve this issue through the AWS CDK CLI?
Headed into 2024, has there been any forward momentum on this? Love using CDK for infra, love self-mutating pipelines, hate the idea of doing manual workarounds to clean up the mess cdk destroy --all leaves behind.
cdk destroy ${STACKS}
The workaround from @ssaarela is very useful, but it only works if you keep the same names for the stacks. If at any point you change the name of any of the application's stacks, it would not work. It would be ideal to be able to create the stacks as sub-stacks, or to have some kind of tracking so old stacks are deleted automatically.
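A rough sketch of such tracking, assuming the application stacks share a naming prefix (the "my-app-" prefix and the status filter below are illustrative), could compare what the app currently synthesizes against what is deployed and report the orphans:

import subprocess

import boto3


def find_orphaned_stacks(prefix: str = "my-app-") -> list[str]:
    """Returns deployed stack names with the given prefix that the app no longer defines."""
    current = set(
        subprocess.run(["cdk", "list"], capture_output=True, text=True, check=True)
        .stdout.split()
    )
    cfn = boto3.client("cloudformation")
    deployed = set()
    for page in cfn.get_paginator("list_stacks").paginate(
        StackStatusFilter=["CREATE_COMPLETE", "UPDATE_COMPLETE", "UPDATE_ROLLBACK_COMPLETE"]
    ):
        for summary in page["StackSummaries"]:
            if summary["StackName"].startswith(prefix):
                deployed.add(summary["StackName"])
    return sorted(deployed - current)  # deployed but no longer defined by the app


for name in find_orphaned_stacks():
    print(f"orphaned stack: {name}")  # could then be passed to `aws cloudformation delete-stack`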
Also note that once the pipeline stack is deleted via cdk destroy, the CloudFormation role needed to manage and delete the child stacks is often deleted as well, meaning that you have to recreate that CloudFormation role to delete the child stacks.
Following this example here: the above command creates a pipeline stack, and the pipeline then creates an application stack. cdk destroy only deletes the pipeline stack, not the application stack created earlier.