actions / runner

The Runner for GitHub Actions :rocket:
https://github.com/features/actions
MIT License

Using Environments without creating deployment automatically #2120

Open LaurenzReitsam opened 2 years ago

LaurenzReitsam commented 2 years ago

Current situation: Every workflow job that uses an environment automatically creates a new deployment. This appears to be intended behavior.

Problem: Access to an environment may also be needed for reasons other than deployments, such as running integration tests (the deployment is already done; we want to verify the correct behavior of the latest deployment).

Possible solution: Can we add an option to avoid automatically creating a deployment whenever an environment is used? One idea would be to set an environment variable like AUTO_DEPLOYMENT=false.
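
A hypothetical sketch of what that could look like (AUTO_DEPLOYMENT is not an existing setting, and the job/environment names are only illustrative):

jobs:
  integration-tests:
    runs-on: ubuntu-latest
    environment: staging        # needed only for the environment's secrets/vars
    env:
      AUTO_DEPLOYMENT: false    # hypothetical opt-out; today this job would still create a deployment
    steps:
      - run: ./run-integration-tests.sh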

Additional information

flobernd commented 2 years ago

The exact same thing bugs me as well.

I would even go one step further and decouple the deployments from environments completely by default.

IMHO every workflow should be able to opt into being a deployment, or not. Instead of using the environment to determine this, we could just have another key in the YAML (similar to the concurrency key).

civitaspo commented 1 year ago

I have the same problem. I want to allow deployments only on protected branches after the pull request has been merged, but I also want to use environment secrets at pull request time. Specifically, my case is running terraform plan in a pull request and terraform apply in a workflow that runs on a protected branch after the pull request is merged. Currently, it is not possible to limit deployments to a protected branch while sharing the environment secrets with workflows on non-protected branches.
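
A rough sketch of that setup under current behavior (branch and environment names are illustrative); both jobs reference the environment for its secrets, so both are recorded as deployments even though only the apply job actually deploys:

on:
  pull_request:
  push:
    branches: [main]

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    environment: production   # only needed for secrets, but still creates a deployment
    steps:
      - uses: actions/checkout@v4
      - run: terraform init && terraform plan

  apply:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: production   # this job is the real deployment
    steps:
      - uses: actions/checkout@v4
      - run: terraform init && terraform apply -auto-approve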

jameslounds commented 1 year ago

Temporary workaround: use GitHub's API in the workflow to delete all deployments that match our SHA. We can use the github-script action, which lets us use octokit.js; the job will need the deployments: write permission.

      - name: Delete Previous deployments
        uses: actions/github-script@v6
        with:
          script: |
            const deployments = await github.rest.repos.listDeployments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: context.sha
            });
            await Promise.all(
              deployments.data.map(async (deployment) => {
                // we can only delete inactive deployments, so let's deactivate them first
                await github.rest.repos.createDeploymentStatus({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  deployment_id: deployment.id,
                  state: 'inactive'
                });
                return github.rest.repos.deleteDeployment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  deployment_id: deployment.id
                });
              })
            );

tianhuil commented 1 year ago

Same: would love to be able to disable this with a YAML property like autodeploy: false.

berkeli commented 1 year ago

Temporary workaround (from @jameslounds above): use GitHub's API in the workflow to delete all deployments that match our SHA, via the github-script action; the job needs the deployments: write permission.


Works nicely. One gotcha: the script block is JavaScript, so a #-style comment inside it is a syntax error; comments there need to use // (or be removed) for it to work.
yusijs commented 1 year ago

This would be great. My pull requests currently look like this: [image]

We use environments for builds etc as well (secrets), so it becomes a mess very quickly. Being able to specify the environment at the top level (before the jobs) might also help a bit, but ideally it would be possible to do something like this in the job definition:

environment:
  name: dev
  url: https://github.com
  deployment: false
SrBrahma commented 1 year ago

@jameslounds what about creating an Action for it in the Marketplace?

constgen commented 1 year ago

We already have a ton of actions for deployment creation and status updates, so it is not necessary to create a new one. My issue now is that when I use a custom deployment action for more control over the deployment status, I end up with a duplicate deployment in the history. Having one of the options suggested here would help.

nagibyro commented 1 year ago

Just FYI, if you're using @jameslounds' workaround for workflows that run on pull requests, you need to modify the script, since GITHUB_SHA is the merge commit, not the head commit that was just pushed to the PR branch (which is what the deployments are created against). You can pass github.event.pull_request.head.sha to the step as an environment variable:

  delete_github_deployments:
    runs-on: ubuntu-latest
    needs: run_tests
    if: ${{ always() }}
    steps:
      - name: Delete Previous deployments
        uses: actions/github-script@v6
        env:
          GITHUB_SHA_HEAD: ${{ github.event.pull_request.head.sha }}
        with:
          script: |
            const { GITHUB_SHA_HEAD } = process.env
            const deployments = await github.rest.repos.listDeployments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: GITHUB_SHA_HEAD
            });
            await Promise.all(
              deployments.data.map(async (deployment) => {
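                // deployments can only be deleted once they are inactive, so mark each one inactive first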
                await github.rest.repos.createDeploymentStatus({ 
                  owner: context.repo.owner, 
                  repo: context.repo.repo, 
                  deployment_id: deployment.id, 
                  state: 'inactive' 
                });
                return github.rest.repos.deleteDeployment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  deployment_id: deployment.id
                });
              })
            );

We had to add this as a separate job that runs at the end of the workflow, instead of as the last step of the other jobs, because the deployment didn't always seem to be deleted in that case. So you still see the message on the PRs until the whole pipeline runs, which can still be confusing for folks, but it seems like the best you can do for now.

SrBrahma commented 1 year ago

@nagibyro Many thanks! So if I have 4 .yaml files, I add this to their end?

Yes! But probably just putting it on the longest one would do it. Maybe there is a way to create a workflow that runs after all of them?

kylebjordahl commented 1 year ago

I ran into a similar version of this where our deploy workflows are manually triggered (either via UI or GH CLI) and we supply a specific release ref to deploy; the deployments created by the workflow show the current state of main as the deployed ref, when in fact they should show the ref of the specified release.

I made this Action to handle our case, but feel free to fork it for your own purposes; hope it helps others until this gets sorted

amos-kibet commented 1 year ago

This action worked for me: strumwolf/delete-deployment-environment. It optionally deletes deployments and environments; you decide what to delete.
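
A rough usage sketch of that action from memory (the input names and required token permissions should be verified against the action's README; the environment name is illustrative):

      - name: Delete deployments for this environment
        uses: strumwolf/delete-deployment-environment@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          environment: pr-preview
          onlyRemoveDeployments: true   # keep the environment itself, only drop its deployments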

kamilzzz commented 1 year ago

Any comments from the GitHub side regarding this issue? It was reported 9 months ago.

The workarounds presented here work, but they are, well... workarounds, and I feel like this should be implemented by GitHub Actions itself. There are several scenarios where a workflow may target an environment but is not a real deployment, like running Terraform plan or running integration tests, as mentioned before.

Hronom commented 1 year ago

Same here, any answer for Enterprise customers?

fabasoad commented 1 year ago

Have the same problem, also on GHES. It would be great to have any updates on this. Thanks.

MichaelMHoff commented 1 year ago

Same problem here. We are running automated Selenium tests with stage-dependent credentials, and these "fake deployments" really mess things up, as they are displayed not only on PRs but also in our Jira integration.

danielsantiago commented 1 year ago

I'm having the same problem with the Jira integration, each Job in the Workflow is treated as a different deployment.

craig-king commented 1 year ago

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

JonathanAtCenterEdge commented 1 year ago

+1, our workflows to bring down our infrastructure act as deployments due to this.

bombillazo commented 1 year ago

Our account is bloated with ephemeral environments/deployments simply because we want to use some environment variables for setup.

Please add a feature to remove these deployments; if not, it would be great to auto-delete them after the PR is merged.

hknutsen commented 1 year ago

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

This is my exact issue as well. I run "Terraform Plan" for multiple environments on PRs, and since they all count as deployments, it pollutes the PR.

tinogo commented 1 year ago

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

This is my exact issue as well. I run "Terraform Plan" for multiple environments on PRs, and since they all count as deployments, it pollutes the PR.

Same here! Things get even more annoying when you've configured deployment approvals, too. In those situations you also have to approve each and every job that uses the environment but isn't really deploying anything...

Therefore: +1 for a deployment: false property or something like this. Or even better, follow GitLab's approach: https://docs.gitlab.com/ee/ci/yaml/#environmentaction
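
For comparison, GitLab's environment:action (linked above) lets a job use an environment without being recorded as a deployment; a rough sketch of that syntax, per the linked docs (job and environment names are illustrative):

integration-tests:
  stage: test
  script:
    - ./run-integration-tests.sh
  environment:
    name: staging
    action: access   # job uses the environment but does not create a deployment record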

AustinZhu commented 1 year ago

+1 from here as well. Terraform plans are a good example of targeting an environment but not "deploying" to it. We really need a flag to specify if this is a full deployment or not

Exactly!

mkarbo commented 1 year ago

See also: https://github.com/orgs/community/discussions/36919#discussioncomment-6852220

TrevorSmith-msr commented 11 months ago

I have this same issue with a workflow that runs Terraform Plan.

bombillazo commented 11 months ago

We used the script suggested here (using the GH API) to manually delete the environments periodically. It's really unfortunate that this isn't available as an option built into GitHub...

godd9170 commented 10 months ago

We are getting great value out of the environment-specific variables for our frontend tests/builds, and we are also continuously confused by the deployment statuses. Is there perhaps a way to indicate the context for a job, so we can access the right vars, without using the environment key in its definition?

stevage commented 9 months ago

It's currently basically impossible to have conversations on PRs because every conversation is completely flooded with this:

[image]

At the very least, runs of consecutive identical deployments should be automatically collapsed.

mikocot commented 8 months ago

I feel like those 2 issues are related and would benefit from this one. https://github.com/orgs/community/discussions/67727

https://github.com/orgs/community/discussions/67728

johanmolen commented 7 months ago

Would be great to have any updates on this.

RobertPaulson90 commented 7 months ago

This issue becomes even more pronounced when you need manual approval gates. I don't want to manually approve 3 times (terraform plan -> terraform apply -> az functionapp deployment). I only want to approve once, after the terraform plan. But because I need the same credentials across all jobs and the approval setting is on the environment, it results in 3 approval gates for the same environment while also flooding the deployment logs on every plan + apply.

An alternative (horrible) fix is to set secrets as global variables as opposed to within the environment. But I don't want my prod credentials visible from outside of the main branch...

Our enterprise is considering consolidating everything to GitHub, with hundreds of repos on other platforms, but this particular issue is a major shortcoming of the GH Actions product. It has been a problem for too long, while tools like GitLab CI and Azure DevOps have easy solutions.

Please somehow offer a solution that decouples environment credentials from whether an environment actually records a deployment (and thereby also whether a job requires approval or not).

tornike commented 6 months ago

In my case, deployments happen from a separate IaC repository, and in the application repos environments are used just for context; the GitHub deployments there have no meaning other than to cause inconvenience and misunderstanding. There should be some option to disable automatic GitHub deployment creation.

piotrekkr commented 6 months ago

This issue makes using environment-scoped vars and secrets a pretty irritating process that pollutes the repository with useless deployments that could never actually happen.

Here is my use case:

When I run the deploy workflow I get three deployments: the first is the manual one I created, the next comes from the build job, and one more from the deploy job. Since my manual deployment is older than the one the deploy job created, it is not shown as the latest deployment on the environment. I had to create a custom action to remove those unwanted deployments.

The same goes for other workflows that do not deploy anything but want to access the environment: each run generates a new deployment that I then need to clean up.

Why pollute such a useful feature as environment-scoped vars and secrets with deployments?

airtonix commented 5 months ago

We have many apps in our monorepo.

Each app gets a set of environments, and each environment contains a set of variables that control things from the AWS account ID and build command all the way through to the Slack channel ID.

During an actual deployment, we like what this does:


environment: ${{ inputs.Stage }}/s3/${{ inputs.AppName }}

steps:
  - uses: some/action
    with:
      SlackChannelId: ${{ vars.SLACK_CHANNEL_ID }}

...

But we also have other workflows, either triggered on PRs or run manually, that are sensitive to the AppName.

Here, like everyone else in this thread, we find the association with the concept of a deployment counterproductive, for this major reason:

The Jira integration with GitHub treats a "GitHub Deployment" as a literal deployment. This means they end up on the Releases board and are marked as a release on tickets associated with the commits involved.

This ends up causing a lot of wasted time explaining things over and over again.

[!WARNING] There is a need to be able to define access to variables through the environment key without it triggering a deployment association.

Perhaps this semantic association should be removed as implicit behaviour; instead, the association should be created by running an explicit action.

timmywil commented 5 months ago

FWIW, the jQuery team uses multiple environments during testing and I don't consider any of them deployments. The "deployment" messages are just noise to me.

Theodlz commented 5 months ago

I just started using secrets for GitHub Actions and honestly... I don't quite get why this is automatically coupled to deployments. I see how it can be useful for some, but most people just want to use secrets in their CI, and PRs flooded with "deployment inactive" messages make it somewhat painful to use. I mean, nothing is being deployed whatsoever, and it's confusing for both the developer and the reviewer of a PR.

jelmerk commented 4 months ago

+1 for this. It makes no sense to tie these two things together

AndresPinerosZen commented 4 months ago

In my use case I have no problem with the deployment being created, but I would like to have control over its status once the job is finished. I don't want the status to be "green"; I'd like to leave it as "in progress" so other tools can work with the status.

aheruz commented 4 months ago

+1

kumarpramod commented 3 months ago

+1

hikerspath commented 3 months ago

Functionally, environments should not equal deployments. These deployment events trigger different behavior in tools like MS DevOps, Jira, and others that track them. It is entirely possible to require environment-based secrets while performing something like a terraform plan or another test suite that doesn't deploy anything itself. I agree that there should be an override option like deployment: false or something of the sort.

joelcoxokc commented 3 months ago

Hello... any updates on this? We spent a lot of time moving everything over to environments with GitHub Actions, and now we have a massive mess with 15 deployment events being created from one deployment.

We need the environment secrets in each of 15 different jobs, but now we have 15 staging/production deployments when it should only be one.

The deployment should only be recorded after our final run. Please, please help!

Anemony22 commented 2 months ago

We're in the same boat. Would be great to be able to use the power of environment secrets without spamming the PR history.

We currently use environment secrets in actions to do a bunch of pre-deployment checks.

nicolasguillen-tamedia commented 2 months ago

+1

kylebjordahl commented 1 month ago

FWIW, we're still having this issue in an entirely new project (I commented here over a year ago while working on a totally different product), so I solved it a different way: use the gh CLI in a composite action to fetch the environment's vars from the API.

It's not terribly elegant, and I had to set up an internal GitHub App in order to get a token with the proper permissions, but now that I have it set up it works like a charm. While I could make a reusable action to do this, most of the work is actually in the process of setting up the GitHub App and dealing with the token, which you'd have to do yourself anyway.
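
A minimal sketch of that approach, assuming the caller supplies a token with permission to read the repo's environment variables; the names below are illustrative, not the action described above:

# action.yml of a composite action (illustrative)
name: fetch-environment-vars
description: Read an environment's variables via the REST API instead of setting environment on the job
inputs:
  environment:
    description: Environment whose variables should be loaded
    required: true
  token:
    description: Token with permission to read environment variables (e.g. a GitHub App installation token)
    required: true
runs:
  using: composite
  steps:
    - name: Export environment variables into GITHUB_ENV
      shell: bash
      env:
        GH_TOKEN: ${{ inputs.token }}
      run: |
        # note: writes simple KEY=value lines; multi-line values would need the heredoc syntax for GITHUB_ENV
        gh api "/repos/${GITHUB_REPOSITORY}/environments/${{ inputs.environment }}/variables" \
          --paginate --jq '.variables[] | "\(.name)=\(.value)"' >> "$GITHUB_ENV"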

Would be great if GitHub just allowed us to specify whether a workflow should create a deployment or not 😢

andreujuanc commented 1 month ago

I'm thinking of ditching environments and going back to using my fork of this

airtonix commented 1 month ago

I'm thinking of ditching environments and going back to using my fork of this

Environments are more than just secrets. They also provide the manual approval step.

andreujuanc commented 1 month ago

Environments are more than just secrets. They also provide the manual approval step.

I know, but we can't even use it for PRs. It's almost like GH devs don't use GH products.

ricardofalc commented 1 week ago

This is very, very annoying. Is it possible to introduce a YAML property to disable this?

[images]