serverless / serverless-meta-sync

Secure syncing of serverless project meta data across teams

V0 Goals #1

Open austencollins opened 8 years ago

austencollins commented 8 years ago

The workflows of larger teams vary, but the general problem teams are running into is this:

• Most developers on a team, working on the same project, will have their own stage which they use while developing. When a new developer joins, they git clone the serverless project, run project init, then create their own stage.
• However, some project stages are shared across developers and CI/CD systems (e.g., test, beta, production).
• The developers and CI/CD systems who share those stages (e.g., test, beta, production) need a secure way to sync/pull the metadata for them, outside of their version control system (e.g., Git).

We need a Serverless Plugin that allows them to do that. It should simply sync the variable files in the _meta/variables folder with copies stored in an S3 bucket. Here's how it should work in more detail:

"meta": {
   "name": , // Name of the S3 bucket that holds the meta data.  Project variables can be used here.
   "region": // The region the S3 bucket resides in.  Project variables can be used here.
}
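
For illustration, here is a minimal sketch of what that sync step could look like, assuming the bucket name and region come from the `meta` block above and using the `@aws-sdk/client-s3` package. The file names, directory, and function names below are hypothetical, not the plugin's actual implementation:

```typescript
// Hypothetical sketch of syncing _meta/variables files with an S3 bucket.
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { promises as fs } from "fs";
import * as path from "path";

// These would come from the "meta" config block shown above (illustrative values).
const bucket = "my-project-meta"; // meta.name
const region = "us-east-1";       // meta.region
const variablesDir = "_meta/variables";

const s3 = new S3Client({ region });

// Push a local variables file to the bucket so teammates and CI/CD can pull it.
async function pushVariablesFile(fileName: string): Promise<void> {
  const body = await fs.readFile(path.join(variablesDir, fileName));
  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: fileName, Body: body }));
}

// Pull the bucket's copy of a variables file down into _meta/variables.
async function pullVariablesFile(fileName: string): Promise<void> {
  const res = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: fileName }));
  const body = await res.Body!.transformToString();
  await fs.writeFile(path.join(variablesDir, fileName), body);
}
```

A real implementation would presumably also compare the local and remote copies before overwriting either side, but that is beyond the scope of this sketch.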
erikerikson commented 8 years ago

Maybe MetaSync without options should sync the s-variables-common.json file. MetaSync with a stage option should sync the s-variables-common.json and s-variables-stage.json files. MetaSync with a stage and region option should sync the s-variables-common.json, s-variables-stage.json, and s-variables-stage-region.json files.

After all, the stage-scoped variables include the variables defined in s-variables-common.json, et cetera. Not sure I've fully thought that through.
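
As a rough sketch of that selection rule (the option shape and exact file-naming scheme here are my assumptions, not the plugin's API):

```typescript
// Hypothetical sketch of the file-selection rule described above.
interface SyncOptions {
  stage?: string;
  region?: string;
}

// Returns the variable files one MetaSync run would cover for the given options.
function filesToSync(opts: SyncOptions): string[] {
  const files = ["s-variables-common.json"];
  if (opts.stage) {
    files.push(`s-variables-${opts.stage}.json`);
    if (opts.region) {
      files.push(`s-variables-${opts.stage}-${opts.region}.json`);
    }
  }
  return files;
}

// Example:
// filesToSync({ stage: "beta", region: "us-east-1" })
//   -> ["s-variables-common.json", "s-variables-beta.json", "s-variables-beta-us-east-1.json"]
```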

erikerikson commented 8 years ago

Our planned but unimplemented CI/CD strategy is to have a stage per branch rather than per developer. We see the branch as the macro unit of testing need.

As for what to do about sensitive information, we are currently taking the position that the variables themselves should never be sensitive. After all, even though the metadata is not in a source control system, it will end up on multiple disks as a result of Amazon's handling, so it would have to be encrypted to live in the environment securely. We accomplish this by placing only a pointer to the sensitive asset in our _meta configuration files. This has advantages like allowing _meta/resources and _meta/variables to be checked in. Another important advantage for the paranoid among us is providing a control plane that lets engineers use dev/test certificates but not the production ones. Resolving the pointer at runtime does introduce a potential race condition around making sure the sensitive asset has finished loading, but that is easily solved, and the many advantages outweigh the small amount of added complexity and cold-start latency.
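
To make the pointer idea concrete, here is a rough sketch of resolving such a pointer inside a handler, assuming the sensitive asset lives in S3; the environment variable names are made up for the example:

```typescript
// Hypothetical illustration of the "pointer, not secret" pattern described above.
// The pointer (an S3 key here) can live in the checked-in _meta files; the secret itself never does.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// The pointer is resolved once per container, outside the handler, so the fetch
// happens during cold start and is shared by all invocations. Awaiting the
// promise in the handler closes the "has it finished loading?" race mentioned above.
const secretPromise: Promise<string> = (async () => {
  const res = await s3.send(new GetObjectCommand({
    Bucket: process.env.SECRETS_BUCKET!,    // illustrative env var names
    Key: process.env.DB_PASSWORD_POINTER!,  // the pointer stored in _meta/variables
  }));
  return res.Body!.transformToString();
})();

export async function handler(): Promise<{ statusCode: number }> {
  const dbPassword = await secretPromise; // resolves immediately after the first call
  // ... use dbPassword to reach the database ...
  return { statusCode: dbPassword ? 200 : 500 };
}
```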

To be completely honest, we're still working through the "what is driven by SLS as opposed to our scripts?" question. For example, trying to plug CI/CD into SLS seems like folly, as our CI/CD is the orchestrator that knows how to direct SLS. Trying to make SLS the tool for everything seems to be asking for trouble, at best, for at least the short term. That said, I believe there will be teams that want everything driven by SLS, and this use case seems a valid choice for an organization; I'm just not sure of the priority of this as a feature.

austencollins commented 8 years ago

@erikerikson Please check out the new demo of this plugin and we can discuss all of this tonight. Thanks as always for your input :)