aws-samples / aws-secure-environment-accelerator

The AWS Secure Environment Accelerator is a tool designed to help deploy and operate secure multi-account, multi-region AWS environments on an ongoing basis. The power of the solution is the configuration file which enables the completely automated deployment of customizable architectures within AWS without changing a single line of code.

[FEATURE] Consider simplifying the stack #679

Closed: rverma-dev closed this issue 2 years ago

rverma-dev commented 3 years ago

This is generic feedback, so we can keep this ticket open while we aggregate the changes. We can collate some ideas here to improve the developer experience with the solution.

There are a few open questions about the architecture:

  1. Can we seed the solution from CodeCommit, or better yet GitHub (a separate repo), in the first place? This would align with the GitOps philosophy, would let us keep all other management-account changes in the same repo, and would also help reduce the number of S3 buckets involved.

  2. Consider publishing a base PRE_BUILT_IMAGE to start with.

  3. Subsequently, consider using CodeArtifact to publish changes to the Accelerator's constructs independently, and pull in other constructs from independent repos at different phases.

  4. Can we simplify the DynamoDB storage? If possible, use a sort key as well to break the data down further. For instance, the Outputs table stores the key `account/0`, which yields one large array; instead we could use the account as the partition key and the management id as the sort key, and populate smaller outputs. The same applies to the OutputTable (outputKey, outputValue) and the parameters table (accounts). (A minimal sketch of this key schema follows the list.)

  5. Support the ability to resume a failed Step Functions execution, at least for the master stack: https://aws.amazon.com/blogs/compute/resume-aws-step-functions-from-any-state/
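A minimal sketch of the key schema proposed in item 4, assuming hypothetical attribute names (`accountKey` as the partition key, `outputId` as the sort key); the table name is taken from the scan command quoted later in this thread, and this is not the Accelerator's actual schema:

```sh
# Hypothetical restructured Outputs table: one small item per output instead
# of one large serialized array per account. Attribute names are assumptions.
aws dynamodb create-table \
  --table-name nsl-Outputs \
  --attribute-definitions \
      AttributeName=accountKey,AttributeType=S \
      AttributeName=outputId,AttributeType=S \
  --key-schema \
      AttributeName=accountKey,KeyType=HASH \
      AttributeName=outputId,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST

# Reading one account's outputs then becomes a targeted query rather than a
# full-table scan:
aws dynamodb query \
  --table-name nsl-Outputs \
  --key-condition-expression 'accountKey = :a' \
  --expression-attribute-values '{":a": {"S": "management"}}'
```

With this shape, editing a single output for a local run (item 4's motivation) would be a one-item `update-item` call rather than rewriting a whole serialized array.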

Brian969 commented 3 years ago

1a) If I understand correctly, this works today: manually create your CodeCommit repo with the defined naming, and we will use it instead of pulling the original config file(s) in from S3. We don't document this because many users of the solution are not yet familiar with AWS, and an S3 file drop is easy for most non-technical users to follow.
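A minimal sketch of that manual seeding, assuming a placeholder repository name, region, and config file name (the actual repo naming convention comes from your Accelerator installation, and the `codecommit::` clone syntax requires the `git-remote-codecommit` helper):

```sh
# Repository name and region are placeholders -- use the naming the
# Accelerator expects for your installation.
aws codecommit create-repository --repository-name ASEA-Config

# Requires git-remote-codecommit (pip install git-remote-codecommit).
git clone codecommit::ca-central-1://ASEA-Config
cd ASEA-Config

# Seed the repo with the Accelerator config file so S3 is never consulted.
cp ../config.json .
git add config.json
git commit -m "Seed Accelerator configuration"
git push
```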

1b) It has been suggested by at least one other person that we store the remainder of the customer config files in CodeCommit instead of the Phase0-config bucket; this is under consideration. We would still want to keep the option for the customer to update them by placing new files in the S3 customer input bucket.

2) We considered this and initially decided against it. We are re-exploring the option, along with a couple of other ideas, given some of the challenges we've had around dependency management.

3) Interesting idea.

4) Why? This is internal inner workings, not seen by most users. These variables have moved around over time as we struggled with the limits associated with Secrets, Parameters, S3, etc. (which may explain the current structure). What do you hope to gain from this recommendation?

rverma-dev commented 3 years ago

Point 4 basically comes down to facilitating smaller local deployments, where you can simply edit an output in DynamoDB. I am trying to figure out how to implement a more robust local run; it seems a little tricky though :(

rverma-dev commented 3 years ago

@Brian969 If that is the case, then adding an SCP/IAM policy should also be possible without any S3 modifications. I tried overriding an SCP by committing it to the scp directory in CodeCommit, but it wasn't overridden.

In fact, a better thing to do here would be to commit those default static resources back to CodeCommit for manual overrides and better tracking.

Brian969 commented 3 years ago

YES, and if we did it we would do it for everything in that bucket: config-rules, config, firewall, iam-policy, rsyslog, scp, ssm-documents (except certs).

rverma-dev commented 3 years ago

@Brian969 Another question: if what you say holds true, then even if I remove my seed S3 bucket and seed KMS key, the state machines should continue to work, even with the override/save-outputs attributes.

Brian969 commented 3 years ago

At this time the seed bucket is mandatory, even if you do not use it for anything (certs, firewall config, or seed config). You can either put your seed config in the seed bucket or prepopulate it in CodeCommit; once the CodeCommit copy exists/is created, we always use it and never look at S3 for that file again. No other file is currently supported in CodeCommit.

rverma-dev commented 3 years ago

Recently I tried to deploy a custom IAM policy by updating it directly in CodeCommit. It didn't work, and it complained that the policy does not exist in the bucket. Since you already mentioned that no other files are supported in CodeCommit, I believe IAM policies and SCPs are not supported for now.

Brian969 commented 3 years ago

Correct: they need to be placed in your customer input S3 bucket at this time. I realize that's not what you want, but the internal Accelerator bucket these files are copied to does have versioning enabled, which provides full history in the meantime.
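A minimal sketch of retrieving that history from the versioned internal bucket, with placeholder bucket, prefix, and key names (the real bucket name comes from your installation):

```sh
# List stored revisions of a copied file (bucket and prefix are placeholders).
aws s3api list-object-versions \
  --bucket accel-internal-bucket \
  --prefix iam-policy/

# Download a specific earlier revision for comparison.
aws s3api get-object \
  --bucket accel-internal-bucket \
  --key iam-policy/my-policy.json \
  --version-id VERSION_ID \
  previous-policy.json
```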

rverma-dev commented 3 years ago

@Brian969 Just wondering what the process is for passing/generating outputs while using development mode, i.e. cdk.ts. Currently we copy all the outputs to an output.json using:

```sh
aws dynamodb scan --table-name nsl-Outputs \
  --select SPECIFIC_ATTRIBUTES --attributes-to-get outputValue \
  | jq '.Items[].outputValue.S' -r | jq '.[]' | jq -s .
```

But if we want to run both phase1 and phase2, we need to run phase1 first, let the step function complete until it stores the phase1 outputs, and only then refresh the outputs for phase2.

Brian969 commented 2 years ago

Following up on the last comment - did you see PR#753?