
AWS CodeDeploy #633

Open FeodorFitsner opened 8 years ago

FeodorFitsner commented 8 years ago

Resources:

- http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-windows.html
- http://docs.aws.amazon.com/sdkfornet/v3/apidocs/Index.html

spyoungtech commented 6 years ago

For anyone interested in this: I've rigged up a workaround using the AppVeyor S3 deployment provider, combined with an AWS Lambda function that creates a CodeDeploy deployment via the S3 write event.

A basic example works as follows:

Your appveyor.yml may contain something like this:

environment:
    application_name: "myapp"
    deployment_group: "myDG"

after_build:
  # package zip for codedeploy
  - 7z a myapp-codedeploy.zip path\to\assets\*.dll -ir!.\path\to\appspec.yml

artifacts:
  - path: myapp-codedeploy.zip
    name: CodeDeployBundle

deploy:
  provider: S3
  access_key_id: <your id>
  secret_access_key: <your key>
  bucket: "mybucketname"
  region: us-east-1
  # folder path sections used by the lambda function to obtain codedeploy parameters
  folder: $(application_name)/$(deployment_group)/$(appveyor_build_version)
  artifact: CodeDeployBundle

Now when this deployment is triggered, it writes to S3, which in turn triggers the Lambda function that kicks off the CodeDeploy deployment.
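The Lambda function itself isn't shown in the thread. A minimal sketch of what it might look like, in Python, assuming the bucket key layout from the YAML above and an execution role allowed to call codedeploy:CreateDeployment (the handler and variable names here are my own, not from this thread):

# Hypothetical handler for the S3 PUT event generated by the AppVeyor S3 deploy.
# It parses the CodeDeploy parameters out of the object key and starts a deployment.
from urllib.parse import unquote_plus
import boto3

codedeploy = boto3.client('codedeploy')

def handler(event, context):
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = unquote_plus(record['object']['key'])

    # key layout from appveyor.yml: <application_name>/<deployment_group>/<build_version>/<bundle>.zip
    application_name, deployment_group = key.split('/')[:2]

    response = codedeploy.create_deployment(
        applicationName=application_name,
        deploymentGroupName=deployment_group,
        revision={
            'revisionType': 'S3',
            's3Location': {'bucket': bucket, 'key': key, 'bundleType': 'zip'},
        },
    )
    # note: this ID lands in CloudWatch logs, not in the AppVeyor build log
    print('Started deployment:', response['deploymentId'])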

Instead of using an inline deploy, you can also create a 'CodeDeploy' environment that uses $(artifact_name) in the artifact field. Then your deploy section may look like this:

deploy:
  - provider: Environment
    name: CodeDeploy
    application_name: "myapp"
    deployment_group: "myDG"
    artifact_name: "CodeDeployBundle"

I like this because it's a bit cleaner and more readable in our projects.

IlyaFinkelshteyn commented 6 years ago

@spyoungtech great, thanks for sharing!

Question though: what is the motivation to use Environment deployment and not inline S3 deployment? The difference is described here. The main idea behind Environments is to decouple deployment from the build. I guess you do not like it to be called S3 while it is actually CodeDeploy?

Note that Environment deployment is asynchronous; if it fails, the build will not fail. However, Environment deployment has its own notification settings.

spyoungtech commented 6 years ago

Yeah, the outcome of the deployment vs. the build status is definitely a drawback. There are also some other drawbacks to the Lambda approach, compared to writing scripts around the AWS CLI (for example, the deployment ID never makes it into the build log).

Initially, we did configure an inline S3 deployment. The naming of the S3 provider for what is ultimately CodeDeploy was a factor, but not a deal-breaker.

The biggest concern I had with the inline S3 deployment is that we did not want to expose AWS credentials in the YAML. Even though the credentials can be encrypted to a secure string, the concern was that the same secure string could be used in another project.
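(For reference, a secure string is embedded in appveyor.yml with the secure: form; the values below are placeholders for ciphertext produced by AppVeyor's encryption tool:)

deploy:
  provider: S3
  access_key_id:
    secure: <encrypted value>    # placeholder ciphertext
  secret_access_key:
    secure: <encrypted value>    # placeholder ciphertext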

For example, this is not so much a concern with Travis, because they use a different private key for every project/repo. With AppVeyor, my understanding is that the same secure string is valid at least account-wide. Because our team members will have pretty much unrestricted access to our GitHub repos and AppVeyor projects, that posed a problem for the management of those credentials, even if they were secure strings (think misuse/accidents by members using the string in other AppVeyor projects).

With an Environment, my understanding is we will be able to give all team members full access to the build projects, but limit their access to the configuration of environments where the keys are held. Plus, Lambda holds the IAM access for CodeDeploy; we only need to give S3 write access to AppVeyor in this setup. We don't have too many reservations about holding secrets in AppVeyor, but it still feels better to trust less, if that makes sense.

There are some other small wins there too, like there being fewer fields to worry about and enforcing the S3 organization by the CodeDeploy app/DG names.

IlyaFinkelshteyn commented 6 years ago

@spyoungtech You can set those environment variables in the UI. Environment variables from the UI are merged with those in YAML. Then use AppVeyor permissions to allow only certain roles to see that project. Or you can use the 'Deny all' approach to explicitly white-list roles at the project level, so people can see only specific projects.

P.S. I believe you do not expect your team members to modify the YAML to print or email variable values. This can be done regardless of whether the encryption key is account- or project-wide, as long as people have access to the repos. If you expect that can happen, then move everything to the UI. Or use a remote YAML location where it cannot be modified by everyone.

spyoungtech commented 6 years ago

Thanks @IlyaFinkelshteyn, we'll definitely have to give some more thought to these configuration options. I've also edited my comment above to indicate that an inline S3 deployment can be used.

At first, I thought it would be convenient for us to have very lax permissions around project configuration. Although, in hindsight, I guess there aren't a ton of reasons why one would need to configure a project in the UI if the project uses a yml file anyhow, so restricting project configuration permissions shouldn't have a major impact on our team members.

That said, restricting project permissions and merging environment variables from the UI seems like a good option for handling sensitive information in general moving forward.

naushad-jamil commented 5 years ago

If this is about simply uploading Lambda function code (a .zip file, for example), the approach seems pretty convoluted. If I can get the agent to run a PowerShell script on the build server, I could use the AWS CLI from PowerShell to upload the .zip file. So I see four things needed here:

  1. A server from where a piece of PowerShell script will be invoked. Is there a way?
  2. PowerShell should be able to download the build artifact. Is there a way?
  3. The script needs to be able to read an environment variable (that tells it the Lambda function name).
  4. The PS script then uses the AWS CLI to upload the .zip file to the Lambda function.

I would like to try this approach but need some direction.
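For what it's worth, AppVeyor itself can cover all four points: the build worker runs PowerShell natively, the freshly built .zip is already on the worker's disk, and environment variables are readable as $env:NAME. A minimal sketch in appveyor.yml, assuming the AWS CLI is present on the build image and credentials are configured; the function name and paths are placeholders:

environment:
  LAMBDA_FUNCTION_NAME: "my-function"    # placeholder, read by the script below

after_build:
  # package the function code (placeholder path)
  - 7z a lambda-code.zip path\to\function\*

deploy_script:
  # runs on the build worker itself, so no separate artifact download is needed
  - ps: aws lambda update-function-code --function-name $env:LAMBDA_FUNCTION_NAME --zip-file fileb://lambda-code.zip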