eddies opened 5 years ago
There are many ways to go about implementing a CI/CD pipeline using Pulumi, and we should definitely provide more documentation, guidance and examples for how to do it. (Though we don't want to be too opinionated since each team has different needs.)
Anyways, here's a standard setup for having separate `testing`, `staging`, and `production` environments using a prototypical "push-to-deploy" strategy (e.g. merging a pull request triggers a deployment).
First, you would create three separate stacks, one for each environment. This would also create three separate `Pulumi.<stackname>.yaml` files that would be checked into your source code. For example:
```
infrastructure/
├── Pulumi.yaml
├── Pulumi.testing.yaml
├── Pulumi.staging.yaml
├── Pulumi.production.yaml
├── ...
```
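As a sketch, creating the three stacks would look something like this (using the `robot-co` organization name from the deploy script below; substitute your own):

```shell
cd infrastructure
# Each `pulumi stack init` creates the stack in the backend and writes the
# corresponding Pulumi.<stack-name>.yaml file into this directory.
pulumi stack init robot-co/testing
pulumi stack init robot-co/staging
pulumi stack init robot-co/production
```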
Those stack-specific `.yaml` files will contain any environment-specific configuration values checked into your source code. (For example, different values for `gcp:project`, or different secrets providers and encryption keys.) So you shouldn't see any merge conflicts for configuration values differing between, say, `testing` and `production`, because they are isolated in their different `Pulumi.<stack-name>.yaml` files.
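For illustration, a `Pulumi.staging.yaml` might look roughly like this (the project name and KMS key path are hypothetical placeholders, and the `encryptedkey` value is generated by Pulumi, not written by hand):

```yaml
# Pulumi.staging.yaml -- illustrative values only
secretsprovider: gcpkms://projects/robot-co-staging/locations/global/keyRings/pulumi/cryptoKeys/staging
encryptedkey: AAAA...   # placeholder; Pulumi generates this per stack
config:
  gcp:project: robot-co-staging
```

You'd typically set the `config:` values with `pulumi config set gcp:project robot-co-staging` while the corresponding stack is selected, rather than editing the file directly.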
And then, you could update `staging` by just creating a PR from `master` -> `staging`, and then `staging` -> `production`. (Or some other branch strategy that makes more sense for you and your team.)
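With the GitHub CLI, for instance, the promotion PRs can be opened from a script (assuming `gh` is installed and authenticated; any PR mechanism works the same way):

```shell
# Promote master to staging; merging this PR triggers the staging deploy.
gh pr create --base staging --head master --title "Promote master to staging"
# Once that has merged and deployed cleanly, repeat for production.
gh pr create --base production --head staging --title "Promote staging to production"
```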
You then just need to have the right CI/CD scripts determine which stack to select and deploy based on the current branch. For example:
```shell
make build
make test

# Deploy: select the stack that matches the current branch.
cd infrastructure
case "$(git rev-parse --abbrev-ref HEAD)" in
  testing)
    pulumi stack select robot-co/testing
    ;;
  staging)
    pulumi stack select robot-co/staging
    ;;
  production)
    pulumi stack select robot-co/production
    ;;
  *)
    echo "Skipping deployment for branch not tied to a specific stack"
    exit 0
    ;;
esac
pulumi up --yes
```
You'll want to modify your scripts to use the right credentials as well, since you will likely have your `production` environment running in a different account/project than your `testing` one. But that's the general setup.
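A minimal sketch of branch-based credential selection (the `CI_SECRET_*` variable names are hypothetical; substitute whatever your CI system's secret store provides):

```shell
# Pick the GCP service-account key to use based on the branch being deployed.
# BRANCH would normally come from your CI environment or `git rev-parse`.
BRANCH="${BRANCH:-testing}"
case "$BRANCH" in
  production) CREDS_VAR="CI_SECRET_GCP_PROD_KEY" ;;
  staging)    CREDS_VAR="CI_SECRET_GCP_STAGING_KEY" ;;
  *)          CREDS_VAR="CI_SECRET_GCP_TESTING_KEY" ;;
esac
echo "Using credentials from \$$CREDS_VAR"
# export GOOGLE_CREDENTIALS="${!CREDS_VAR}"   # bash indirect expansion
```

The pulumi-gcp provider reads `GOOGLE_CREDENTIALS` (among other standard mechanisms), so exporting the right key before `pulumi up` is usually all that's needed.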
You can see our documentation for Continuous Delivery for a little more information. But if there is anything that isn't clear, or anything we could add to improve the docs, please let me know.
How does one do this in a trunk-based deployment flow where people avoid branches per environment? For kubectl, I solve this with folders and a patch per cluster. For Pulumi, does this mean I stop using stacks entirely, since Pulumi forces changes inside main.py to be visible to all stacks? I can't do something like "stack staging uses version 1.2 and stack prod uses version 1.1" inside stacks?
Is it in scope for this project to also describe dev, staging, and prod workflows? For the moment, (I believe) this repo suggests only a PR -> prod workflow.
Assuming the current repo structure (under `gcp/`): I'm puzzled about what "the Prod Way" of supporting a workflow with independent dev, staging, and prod environments would look like.
Assuming a branch per environment, e.g. `dev`, `staging`, `prod`, then the `Pulumi.<stack-name>.yaml` files will (should) differ in at least `secretsprovider` and `encryptedkey` across the three branches (at least for self-managed backends), and possibly also for `config/gcp:project`. So there will always be merge conflicts when promoting features from one environment's branch to another, which opens the door for human error at exactly the points where you want to be reducing the potential for human error (i.e. when graduating from dev to staging or from staging to prod).

I quite liked how identity bootstrapping was described as a series of actions in https://github.com/pulumi/kubernetes-the-prod-way/issues/13#issue-401658222. If we could come up with an analogous description here to at least spec out a future PR, that would be a pretty great outcome.