dougmoscrop / serverless-plugin-split-stacks

A plugin to generate nested stacks to get around CloudFormation resource/parameter/output limits

S3 - Access Denied #130

Open · kedarnag138 opened this issue 4 years ago

kedarnag138 commented 4 years ago

Here's my scenario: before I started using this plugin I was using serverless-nested-stack, and since it was no longer supported, I moved to this one. I now have this set up for several environments (dev, staging, UAT, production). On dev I deleted the stack and deployed a fresh copy, and everything worked smoothly, but on staging I can't delete the stack, and when I try to deploy I get the following error:

[Screenshot, 2020-09-07: CloudFormation deployment failing with an S3 Access Denied error]

I'm currently stuck here and have tried various other options, but for staging and the remaining environments I won't be able to delete the stack and redeploy it. Is there a workaround for this?

Appreciate the help. Thanks!

dougmoscrop commented 4 years ago

You can't easily move CloudFormation resources between stacks - it used to be impossible; AWS now kinda supports it, but I haven't had the time to update this plugin to enable it.
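For background, the path AWS added works roughly like this: mark the resource with `DeletionPolicy: Retain`, remove it from its source stack, then bring it into the target stack with an `IMPORT` change set. A minimal sketch with the AWS SDK for JavaScript - the stack name, bucket name, and logical ID here are all made up for illustration:

```js
// Sketch only: importing an already-existing S3 bucket into a target stack,
// after it was removed from its old stack with DeletionPolicy: Retain.
const AWS = require('aws-sdk');
const cfn = new AWS.CloudFormation();

// Imported resources must be declared in the template with a DeletionPolicy.
const template = {
  Resources: {
    LogBucket: { // hypothetical logical ID
      Type: 'AWS::S3::Bucket',
      DeletionPolicy: 'Retain',
      Properties: { BucketName: 'my-existing-log-bucket' }, // hypothetical bucket
    },
  },
};

cfn.createChangeSet({
  StackName: 'TargetStack', // hypothetical target stack
  ChangeSetName: 'import-log-bucket',
  ChangeSetType: 'IMPORT',
  ResourcesToImport: [{
    ResourceType: 'AWS::S3::Bucket',
    LogicalResourceId: 'LogBucket',
    ResourceIdentifier: { BucketName: 'my-existing-log-bucket' },
  }],
  TemplateBody: JSON.stringify(template),
}).promise().then(() => console.log('import change set created'));
```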

Because of that limitation, this plugin tries to detect already-migrated resources and leave them alone, even if its current configuration would place them in a different stack. So, for example, whatever is in your LogStack might now be targeting a FooStack - all new resources would go there, but the existing ones have to stay, unless, as you saw, you tear down and redeploy. You can see this (roughly) here: https://github.com/dougmoscrop/serverless-plugin-split-stacks/blob/master/lib/migrate-existing-resources.js
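To make that concrete, here's a minimal sketch of the idea (not the plugin's actual code - the names are illustrative): a resource that already lives in a deployed nested stack stays there, and only new resources follow the current configuration.

```js
// Sketch of the idea behind migrate-existing-resources.js (not the actual code):
// a resource that already lives in a deployed nested stack stays there, even
// if the current configuration would route new resources somewhere else.
function destinationStack(logicalId, deployedLocations, configuredStack) {
  // deployedLocations: { logicalId -> nested stack name }, read from the last deploy
  if (deployedLocations[logicalId]) {
    return deployedLocations[logicalId]; // already migrated: leave it where it is
  }
  return configuredStack; // new resource: follow the current configuration
}

// e.g. an old resource stays in LogStack even though new ones go to FooStack:
destinationStack('OldLogGroup', { OldLogGroup: 'LogStack' }, 'FooStack'); // -> 'LogStack'
destinationStack('NewLogGroup', { OldLogGroup: 'LogStack' }, 'FooStack'); // -> 'FooStack'
```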

So I'm not sure whether the error you're seeing is even related - you should go into the LogStack itself, look at the events, and see; I think that S3 error is just bubbling up from the nested stack.

All that said, if your hope was that this plugin would reconfigure your stacks, that isn't possible right now :(

So I think the first step is just to look inside LogStack and see what really went wrong.
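If it helps, one way to dig in is to pull the nested stack's events and look for the first FAILED status. A quick sketch with the AWS SDK for JavaScript - 'LogStack' stands in for the nested stack's actual generated name:

```js
// Sketch: list the nested stack's events and surface the real failures.
// 'LogStack' stands in for the nested stack's actual generated name.
const AWS = require('aws-sdk');
const cfn = new AWS.CloudFormation();

async function showFailures(stackName) {
  const { StackEvents } = await cfn.describeStackEvents({ StackName: stackName }).promise();
  StackEvents
    .filter(e => /FAILED/.test(e.ResourceStatus))
    .forEach(e => console.log(e.LogicalResourceId, e.ResourceStatus, e.ResourceStatusReason));
}

showFailures('LogStack').catch(console.error);
```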