aws / aws-cdk

The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
https://aws.amazon.com/cdk
Apache License 2.0

aws-codepipeline: Only supports KMS but not KMS_MANAGED #13027

Closed: mmuller88 closed this issue 3 years ago

mmuller88 commented 3 years ago

aws-codepipeline fails if I use the CdkPipeline with a self-provisioned bucket, cross-account actions enabled, and KMS_MANAGED encryption. These lines are the culprit:

    // if we have a cross-account action, the pipeline's bucket must have a KMS key
    // (otherwise we can't configure cross-account trust policies)
    if (action.isCrossAccount) {
      const artifactBucket = this.ensureReplicationResourcesExistFor(action).artifactBucket;
      if (!artifactBucket.encryptionKey) {
        throw new Error(
          `Artifact Bucket must have a KMS Key to add cross-account action '${action.actionProperties.actionName}' ` +
          `(pipeline account: '${renderEnvDimension(this.env.account)}', action account: '${renderEnvDimension(action.effectiveAccount)}'). ` +
          'Create Pipeline with \'crossAccountKeys: true\' (or pass an existing Bucket with a key)',
        );
      }
    }
It states clearly that we need to use KMS. But with the property artifactBucket.encryptionKey you can only validate that KMS with an external key is in use; KMS_MANAGED is not covered this way. The underlying issue might be that the property encryptionKey is null when KMS_MANAGED is used.

So I suggest either not letting it be null, or changing the if statement and adding an optional read property on the bucket that exposes which encryption is used.
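To illustrate the idea, here is a minimal self-contained sketch (plain TypeScript with stand-in types, not the actual aws-cdk source; the `encryption` read property and the `supportsCrossAccountActions` helper are hypothetical) of what an encryption-aware check could look like instead of testing only `encryptionKey`:

```typescript
// Stand-ins modeling the suggestion; not the real aws-cdk types.
enum BucketEncryption {
  UNENCRYPTED = 'UNENCRYPTED',
  S3_MANAGED = 'S3_MANAGED',
  KMS_MANAGED = 'KMS_MANAGED',
  KMS = 'KMS',
}

interface ArtifactBucket {
  encryption?: BucketEncryption;  // hypothetical read property suggested above
  encryptionKey?: object;         // only set for BucketEncryption.KMS (external key)
}

// Instead of rejecting every bucket without an explicit encryptionKey
// (which is null for KMS_MANAGED), also accept the KMS_MANAGED mode.
function supportsCrossAccountActions(bucket: ArtifactBucket): boolean {
  return bucket.encryptionKey !== undefined
    || bucket.encryption === BucketEncryption.KMS_MANAGED;
}

console.log(supportsCrossAccountActions({ encryption: BucketEncryption.KMS_MANAGED })); // true
console.log(supportsCrossAccountActions({ encryption: BucketEncryption.S3_MANAGED }));  // false
```

With a check like this, the validation would still reject S3_MANAGED and UNENCRYPTED buckets but let KMS_MANAGED through.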

I created a draft PR: https://github.com/aws/aws-cdk/pull/13028 . If you like it, I will continue with it and implement the new encryption property.

Reproduction Steps

const sourceBucket = new s3.Bucket(this, 'PipeBucket', {
  removalPolicy: core.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
  versioned: true,
  encryption: s3.BucketEncryption.KMS_MANAGED,
});

const pipeline = new Pipeline(this, 'Pipeline', {
  artifactBucket: sourceBucket,
  restartExecutionOnUpdate: true,
});

const sourceArtifact = new Artifact();
const cloudAssemblyArtifact = new Artifact();

const cdkPipeline = new CdkPipeline(this, 'CdkPipeline', {
  // The pipeline name
  // pipelineName: `${this.stackName}-pipeline`,
  cloudAssemblyArtifact,
  codePipeline: pipeline,
  // crossAccountKeys: true,

  // Where the source can be found
  sourceAction: repo,
  ...

What did you expect to happen?

No error

What actually happened?

cdk deploy throws "Artifact Bucket must have a KMS Key to add cross-account action ..."

Environment

Other


This is a :bug: Bug Report

skinny85 commented 3 years ago

Hey @mmuller88 ,

can you confirm that cross-account CodePipelines actually work with KMS_MANAGED Buckets? I was under the impression that they do not...

github-actions[bot] commented 3 years ago

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

dobrynin commented 3 years ago

@skinny85 Reading the S3 documentation, it does seem that you have to use KMS.

Use a customer managed CMK if you want to grant cross-account access to your S3 objects. You can configure the policy of a customer managed CMK to allow access from another account. 

However, our team had implemented this prior to CDK throwing a warning, and it has been working just fine for us w/ an Asset Bucket using S3-managed keys and the following resource policy.

    artifactBucket.bucket.addToResourcePolicy(
      new PolicyStatement({
        actions: ['s3:*'],
        resources: [
          `arn:aws:s3:::${artifactBucket.bucket.bucketName}/*`,
          `arn:aws:s3:::${artifactBucket.bucket.bucketName}`,
        ],
        principals: [accountADeployRole, accountBDeployRole],
      }),
    );

Seems to me that S3_MANAGED should also be allowed for cross-account pipeline stages. If my understanding is in any way mistaken, would very much appreciate your input!

skinny85 commented 3 years ago

When you say "it was working" @dobrynin - what kind of Action were you running in the other account?

dobrynin commented 3 years ago

@skinny85 CloudFormationCreateReplaceChangeSetAction and CloudFormationExecuteChangeSetAction

skinny85 commented 3 years ago

Hmm, that's interesting.

Any way you can provide a minimal repro, so that I can confirm? I just want to make 100% sure it works before we remove this validation 🙂.

dobrynin commented 3 years ago

@skinny85 I've done my best. I am not the most experienced w/ CDK so I can't fully guarantee that the following is a valid reproduction, but it's my attempt at a stripped-down version of what our codebase currently has in production.

import * as codepipeline from "@aws-cdk/aws-codepipeline";
import * as cdk from "@aws-cdk/core";
import * as s3 from "@aws-cdk/aws-s3";
import * as codepipelineActions from "@aws-cdk/aws-codepipeline-actions";
import * as codebuild from "@aws-cdk/aws-codebuild";
import * as kms from "@aws-cdk/aws-kms";
import * as codecommit from "@aws-cdk/aws-codecommit";

import {
  PolicyStatement,
  Role,
  ManagedPolicy,
  ServicePrincipal,
  CompositePrincipal
} from "@aws-cdk/aws-iam";

export default class PipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const artifactBucket = new s3.Bucket(this, "ArtifactBucket", {
      bucketName: `-artifacts-${this.node.uniqueId.slice(-8)}`,
      encryption: s3.BucketEncryption.S3_MANAGED
    });

    const developmentDeploymentRole = new Role(
      this,
      "DevelopmentDeploymentRole",
      {
        roleName: "DevelopmentDeploymentRole",
        assumedBy: new CompositePrincipal(
          new ServicePrincipal("codepipeline.amazonaws.com"),
          new ServicePrincipal("cloudformation.amazonaws.com"),
          new ServicePrincipal("codedeploy.us-west-2.amazonaws.com")
        ),
        managedPolicies: [
          ManagedPolicy.fromAwsManagedPolicyName("AdministratorAccess")
        ]
      }
    );

    const productionDeploymentRole = Role.fromRoleArn(
      this,
      "ProductionDeploymentRole",
      `arn:aws:iam::FILL_IN_ACCOUNT_ID:role/ProductionDeployment`
    );

    artifactBucket.addToResourcePolicy(
      new PolicyStatement({
        actions: ["s3:*"],
        resources: [
          `arn:aws:s3:::${artifactBucket.bucketName}/*`,
          `arn:aws:s3:::${artifactBucket.bucketName}`
        ],
        principals: [developmentDeploymentRole, productionDeploymentRole]
      })
    );

    const pipeline = new codepipeline.Pipeline(this, "Pipeline", {
      artifactBucket,
      role: developmentDeploymentRole,
      pipelineName: "pipelineName"
    });

    const repository = new codecommit.Repository(this, id, {
      repositoryName: "repositoryName"
    });

    const sourceOutput = new codepipeline.Artifact();
    new codepipelineActions.CodeCommitSourceAction({
      actionName: "CodeCommit",
      repository,
      branch: "mainline",
      output: sourceOutput
    });

    const buildOutput = new codepipeline.Artifact("BuildOutput");

    const crossAccountEncryptKey = new kms.Key(this, "CrossAccountEncryptKey", {
      alias: "crossAccountEncryptKey"
    });
    crossAccountEncryptKey.addToResourcePolicy(
      new PolicyStatement({
        actions: ["kms:*"],
        resources: ["*"],
        principals: [developmentDeploymentRole, productionDeploymentRole]
      })
    );

    const buildProject = new codebuild.PipelineProject(
      this,
      "CdkBuildProject",
      {
        buildSpec: codebuild.BuildSpec.fromObject(
          "fill in build spec object" as any
        ),
        encryptionKey: crossAccountEncryptKey,
        projectName: "projectName"
      }
    );

    pipeline.addStage({
      stageName: "Build",
      actions: [
        new codepipelineActions.CodeBuildAction({
          actionName: "CodeBuild",
          role: developmentDeploymentRole,
          project: buildProject,
          input: sourceOutput,
          outputs: [buildOutput]
        })
      ]
    });

    const stackName = "stackName";
    const changeSetName = "changeSetName";

    pipeline.addStage({
      stageName: "Deploy-Prod",
      actions: [
        new codepipelineActions.CloudFormationCreateReplaceChangeSetAction({
          actionName: "CreateChangeSet-Prod",
          stackName,
          changeSetName,
          deploymentRole: productionDeploymentRole,
          role: productionDeploymentRole,
          adminPermissions: true,
          templatePath: buildOutput.atPath(`FILL_IN_FILE_PATH.template.json`),
          // N.B. We use the following in our code but I'm not sure if they are necessary for the reproduction of the issue.
          // parameterOverrides: {
          //   AssetBucket: buildOutput.bucketName
          // },
          // extraInputs: [buildOutput],
          runOrder: 1
        }),
        new codepipelineActions.CloudFormationExecuteChangeSetAction({
          actionName: "ExcuteChangeSet-Prod",
          stackName,
          changeSetName,
          role: productionDeploymentRole,
          runOrder: 2
        })
      ]
    });
  }
}

skinny85 commented 3 years ago

Oh, I think I see what you're doing here.

You have a KMS key, but it's only on the build Project - it's not used anywhere else in the Pipeline!

Can I ask: why this setup? Why not just have the Key on the artifact Bucket itself? What's the advantage of this solution?

dobrynin commented 3 years ago

@skinny85 technically it is also used in our Test stage of the pipeline (I omitted it as it wasn't immediately relevant for the example)

As far as what the benefit is, I think it's mainly just that we have a custom S3 CDK Construct that has S3_MANAGED hard coded for the encryption field. It would be easy for us to parameterize this and make it use the cross-account KMS key, but seeing as it seems to work w/ the current configuration, I think the CDK error is overly restrictive.

dobrynin commented 3 years ago

Note: I simplified the S3 usage in the code sample above^. In reality, it is actually using a custom Construct as I explained in my last comment.

skinny85 commented 3 years ago

Can't you just make the custom S3 resource use the same crossAccountEncryptKey as you have your CodeBuild Project use now?

In my mind, this is actually a pretty fragile solution, and IMO rests on a behavior of CodeBuild that can be considered a bug (in my opinion, when running in a CodePipeline, CodeBuild should not use its Project-set KMS Key, but instead the Key set in the Pipeline itself - if that is not set, then CodeBuild should not use a Key to encrypt its artifacts either).

dobrynin commented 3 years ago

I suppose we can, but since the Artifact Bucket is currently defined w/ S3_MANAGED, doesn't it prove that the error message "Artifact Bucket must have a KMS Key to add cross-account action" is inaccurate?

dobrynin commented 3 years ago

So @skinny85 is right here. Even though we set S3_MANAGED directly on the bucket, because we provide an encryption key to CodeBuild, the CodeBuild artifacts are actually KMS encrypted (not S3_MANAGED). Kind of surprising behavior, not sure if it's documented anywhere, but it makes sense.

It's true that an S3-managed configuration on the overall bucket does work as long as the cross-account objects themselves are KMS encrypted (so maybe the erroring logic, or at least the message, can be improved), but otherwise the workaround is to just use KMS as the top-level bucket encryption.
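The surprising precedence described above can be summed up in a tiny self-contained sketch (hypothetical names modeling the observed behavior, not CDK or CodeBuild APIs): the Project-level key, when present, determines how the artifacts are encrypted, regardless of the bucket's own encryption setting.

```typescript
// Model of the observed behavior: if the CodeBuild Project has its own
// encryptionKey, the artifacts it writes end up KMS-encrypted even when the
// artifact bucket itself is configured with S3_MANAGED encryption.
type BucketEncryption = 'S3_MANAGED' | 'KMS_MANAGED' | 'KMS';

function effectiveArtifactEncryption(
  bucketEncryption: BucketEncryption,
  projectHasKmsKey: boolean,
): BucketEncryption {
  // The Project-level key takes precedence over the bucket default.
  return projectHasKmsKey ? 'KMS' : bucketEncryption;
}

console.log(effectiveArtifactEncryption('S3_MANAGED', true));  // "KMS" (the setup above)
console.log(effectiveArtifactEncryption('S3_MANAGED', false)); // "S3_MANAGED"
```

This is why the cross-account stages kept working despite the S3_MANAGED bucket: the objects being shared were KMS-encrypted all along.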

touzoku commented 2 years ago

I was using code like this and it was simply working until now (it definitely worked on cdk 1.98.0):

new codepipeline.Pipeline(this, 'CodePipeline', {
  artifactBucket: new s3.Bucket(this, 'CodePipelineArtifactsBucket', {
    encryption: s3.BucketEncryption.UNENCRYPTED, // to prevent creating KMS keys (they cost $1/month!)
  }),
});

The build and deploy roles were not granted any additional permissions; cdk figured it out automatically.

Now on cdk 2.0.0-rc.21 my build phase fails with:

[Container] 2021/10/04 08:03:14 Waiting for DOWNLOAD_SOURCE
--
AccessDenied: Access Denied
status code: 403, request id: B2V9K57HYJAKP1JZ, host id: UQNQhgJPc5J+lX2/guzey81ouyg5v3GbH35k79OSAF4MZ7WTq9cbkyzmEFxqc0S5qzi6ePKjUB0= for primary source and source version arn:aws:s3:::example-cicd-codepipelineartifactsbucket4abb56c-1jal79ilgnbe5/Example-CICD-CodeP/Artifact_S/KaZ9ms3

Any ideas how to fix this?

skinny85 commented 2 years ago

@touzoku I wonder whether this could be related to the @aws-cdk/aws-s3:grantWriteWithoutAcl feature flag, which basically switches the s3:PutObject* permission usually used for Bucket.grantWrite() to s3:PutObject, and which is turned on by default in V2.

Can you see if this fixes it? Create an explicit Role for your build Action:

const buildActionRole = new iam.Role(this, 'BuildActionRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

Grant this Role explicit s3:PutObject* permissions:

const artifactBucket = new s3.Bucket(this, 'CodePipelineArtifactsBucket', {
  encryption: s3.BucketEncryption.UNENCRYPTED, // to prevent creating KMS keys (they cost $1/month!)
});
new codepipeline.Pipeline(this, 'CodePipeline', {
  artifactBucket: artifactBucket,
});
artifactBucket.grant(buildActionRole, 's3:PutObject*');

And finally, pass buildActionRole to your build step Action:

new codepipeline_actions.CodeBuildAction({
  // ...
  role: buildActionRole,
});

And let me know if that fixes your problem?

Thanks, Adam

touzoku commented 2 years ago

@skinny85 Adding if (build.role) artifactBucket.grantReadWrite(build.role) has fixed it! Thank you for the tip.

const artifactBucket = new s3.Bucket(this, 'CodePipelineArtifactsBucket', {
  encryption: s3.BucketEncryption.KMS_MANAGED, // to prevent creating KMS keys (they cost money)
  removalPolicy: RemovalPolicy.DESTROY,
})
const build = new codebuild.PipelineProject(this, 'CodeBuildProject', { /* props */ })
build.addToRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['s3:PutObject*'],
    resources: [artifactBucket.arnForObjects('*')],
  })
)
new codepipeline.Pipeline(this, 'CodePipeline', {
  artifactBucket,
  // .... other props
  stages: [
    {
      stageName: 'Build',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'CodeBuild',
          project: build,
          input: sourceOutput,
          outputs: [buildOutput],
        }),
      ],
    },
  ],
})

The code above works in 2.0.0-rc.21.

Maybe cdk should default to KMS managed keys unless specified otherwise? Saves $1/month and makes @QuinnyPig happy.

QuinnyPig commented 2 years ago

So few things make me happy, but this would indeed please me.

skinny85 commented 2 years ago

Sorry, I don't understand 😜.

You said:

@skinny85 Adding if (build.role) artifactBucket.grantReadWrite(build.role) has fixed it! Thank you for the tip.

But then the code that you have that you said works does not have if (build.role) artifactBucket.grantReadWrite(build.role) anywhere:

const artifactBucket = new s3.Bucket(this, 'CodePipelineArtifactsBucket', {
  encryption: s3.BucketEncryption.KMS_MANAGED, // to prevent creating KMS keys (they cost money)
  removalPolicy: RemovalPolicy.DESTROY,
})
const build = new codebuild.PipelineProject(this, 'CodeBuildProject', { /* props */ })
build.addToRolePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['s3:PutObject*'],
    resources: [artifactBucket.arnForObjects('*')],
  })
)
new codepipeline.Pipeline(this, 'CodePipeline', {
  artifactBucket,
  // .... other props
  stages: [
    {
      stageName: 'Build',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'CodeBuild',
          project: build,
          input: sourceOutput,
          outputs: [buildOutput],
        }),
      ],
    },
  ],
})

So... which one is it? 😜