Closed: JaMatus closed this issue 3 years ago
We are running into a similar issue on Linux runners.
The process '/usr/bin/az' failed because one or more lines were written to the STDERR stream
@mburuckercSO @JaMatus Can you please share the workflow file?
@t-dedah sure 🙂 here is the release workflow file:
name: DEV .NET Core CD ARM template
on:
  push:
    branches: [ dev ]
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1
      - name: Build with dotnet
        run: dotnet build --configuration Release
      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
      - uses: actions/upload-artifact@v2
        with:
          name: my-artifact
          path: ${{env.DOTNET_ROOT}}/myapp
  deploy-dev:
    runs-on: windows-latest
    needs: build
    steps:
      - uses: actions/checkout@v2
      - uses: actions/download-artifact@v2
        with:
          name: my-artifact
          path: ${{env.DOTNET_ROOT}}/myapp
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS_DEV }}
      - name: Run ARM deploy
        uses: azure/arm-deploy@v1
        id: arm-deploy
        with:
          subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }}
          resourceGroupName: ${{ secrets.AZURE_RG_DEV }}
          template: ./deployment/backend.json
          parameters: ./deployment/backend.dev.params.json
            sqlAdministratorLogin=${{ secrets.SQLADMINISTRATORLOGIN_DEV }}
            sqlAdministratorLoginPassword=${{ secrets.SQLADMINISTRATORLOGINPASSWORD_DEV }}
            jwtTokenSecret=${{ secrets.JWTTOKENSECRET_DEV }}
            sendgridEmailApiKey=${{ secrets.SENDGRIDEMAILAPIKEY_DEV }}
      - name: Get WebApp publishing profile
        run: |
          $publishingProfile = $( `
            az webapp deployment list-publishing-profiles `
              --name ${{ steps.arm-deploy.outputs.webAppName }} `
              --resource-group ${{ secrets.AZURE_RG_DEV }} `
              --subscription ${{ secrets.AZURE_SUBSCRIPTION }} `
              --xml `
          ) && `
          echo ::add-mask::$publishingProfile && `
          echo ::set-env name=PUBLISHING_PROFILE::$publishingProfile
      - name: Deploy to Azure Web App
        uses: azure/webapps-deploy@v2
        with:
          app-name: '${{ steps.arm-deploy.outputs.webAppName }}'
          slot-name: 'production'
          publish-profile: ${{ env.PUBLISHING_PROFILE }}
          package: ${{env.DOTNET_ROOT}}/myapp
I am experiencing the same issue. It does not matter which ARM template you are deploying; it fails regardless of what you are trying to deploy.
I am seeing this issue too. There seems to be a warning being displayed before the failure; not sure if it is related, but here is the relevant log from a run today. I hadn't seen any warnings prior to today's runs.
2021-06-03T10:54:38.6823203Z ##[endgroup]
2021-06-03T10:54:40.5955013Z Validating template...
2021-06-03T10:54:45.7235801Z ##[warning]
2021-06-03T10:54:45.7983377Z Creating deployment...
2021-06-03T10:54:54.8856051Z ##[error]
2021-06-03T10:55:25.9608960Z ##[error]
2021-06-03T10:55:26.0173126Z ##[error]The process '/usr/bin/az' failed because one or more lines were written to the STDERR stream
I am seeing this issue as well. I will mention that the Azure portal is showing the deployment as successful, and I have verified that the resources have been created/changed.
I am seeing this issue too. There seems to be a warning being displayed before the failure; not sure if it is related, but here is the relevant log from a run today. I hadn't seen any warnings prior to today's runs.
2021-06-03T10:54:38.6823203Z ##[endgroup]
2021-06-03T10:54:40.5955013Z Validating template...
2021-06-03T10:54:45.7235801Z ##[warning]
2021-06-03T10:54:45.7983377Z Creating deployment...
2021-06-03T10:54:54.8856051Z ##[error]
2021-06-03T10:55:25.9608960Z ##[error]
2021-06-03T10:55:26.0173126Z ##[error]The process '/usr/bin/az' failed because one or more lines were written to the STDERR stream
I can confirm this. Validation still works fine, but also shows the warning.
I can confirm that I am seeing the same errors and my deployment was successful
Seeing the same issue also this morning
Seeing the same issue.
If it's related to an az cli change, I think it would be good to add an azcliversion parameter, same as https://github.com/Azure/cli, so that az cli updates are not picked up automatically.
I wonder if it's related to https://github.com/Azure/azure-cli/issues/18262 (though that was reported earlier, for 2.24.0).
Seeing this issue as well. Is anyone aware of any workarounds?
Seeing this issue as well. Is anyone aware of any workarounds?
If you don't need any outputs, we simply used continue-on-error: true.
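For reference, that workaround is just a step-level flag on the deploy step; a sketch reusing the inputs from the workflow shared above (note it also hides genuine deployment failures):
- name: Run ARM deploy
  uses: azure/arm-deploy@v1
  id: arm-deploy
  continue-on-error: true   # don't fail the job when az writes (even blank) lines to STDERR
  with:
    subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }}
    resourceGroupName: ${{ secrets.AZURE_RG_DEV }}
    template: ./deployment/backend.json
    parameters: ./deployment/backend.dev.params.json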
I did, @elizabethlfransen, but as it turns out the outputs are missing from the step. Before you ask, I double-checked that outputs actually exist from the deployment, and they do. However, they don't seem to be bound in the GH Action step.
@iamalexmang Yeah, when it errors out there won't be any outputs. I was lucky to be in a case where we didn't need the outputs.
I have applied a workaround in my fork: https://github.com/Azure/arm-deploy/compare/main...ashmind:main It basically ignores the error output if it's just whitespace.
I updated all my workflows to uses: ashmind/arm-deploy@main, and they seem to succeed so far (though I recommend using ashmind/arm-deploy@39619d594c45f488f8ff2a86d08149c066349c72, as you should have no reason to trust the future state of the main branch in a random repository).
The actual fix should be in az cli (remove the whitespace error output) and not in this action, though, so I am not creating a PR yet.
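In workflow terms the change is only the uses: reference; a sketch with the commit pin (the with: inputs stay whatever you already pass to azure/arm-deploy@v1, here reusing the example workflow above):
- uses: ashmind/arm-deploy@39619d594c45f488f8ff2a86d08149c066349c72   # pinned fork commit instead of azure/arm-deploy@v1
  id: arm-deploy
  with:
    resourceGroupName: ${{ secrets.AZURE_RG_DEV }}
    template: ./deployment/backend.json
    parameters: ./deployment/backend.dev.params.json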
Seeing the same issue. If it's related to an az cli change, I think it would be good to add an azcliversion parameter, same as https://github.com/Azure/cli, so that az cli updates are not picked up automatically.
I think this would be a great fix. I tried to chase down where the version of az cli used in this action comes from so that I could attempt a PR for this change. It seems, though, that this (and the other az actions like login) just relies on the az that is globally installed on the GitHub-hosted runners.
So I think trying to make this change gets quite complicated, because az would either need removing from the base runner image, or there would need to be a mechanism to install a different version somewhere (on top of the existing one?) that all of these other az actions could then find.
I wonder if it would be better if each az action first checked whether the user-specified version of az was installed and, if not, did the installation? And if the user didn't specify a particular version of az, it would fall back to the one already present on the system, or install the latest if that wasn't found?
We are looking into the issue, will update as soon as we get more info.
Thanks @ashmind, yes, the issue is with the whitespace we are getting in the stderr listener. Your changes are a perfect short-term solution, but we are a little hesitant as it fundamentally changes the way errors are treated. As the problem in this case is with azure-cli, the final solution needs to come from their side.
Merged a temporary fix. Please use this until we get a permanent solution from azure-cli team.
- uses: azure/arm-deploy@main
instead of using v1
Any updates on when this issue will be resolved?
@deep-mm please use this as a solution.
- uses: azure/arm-deploy@main
instead of using v1
@t-dedah With that fix, the deployment in GitHub is green, but the app is not deployed correctly for us.
@JaMatus Please try again, deployment should work correctly now.
I can confirm that, today, switching from @v1 to @main succeeded. Pipelines I had which were succeeding for weeks (GitHub-hosted runners) started failing this morning with the same behavior shown above. Switched to @main and the pipelines succeeded.
@t-dedah deployment is working now. Thanks :)
Hi @t-dedah - do you know when/why BufferSource is an empty string? Thanks!
Using arm-deploy@main, I now have the opposite problem: the deployment was unsuccessful, but the pipeline doesn't notice and continues. The output still indicates the failure, which nevertheless goes undetected.
Run azure/arm-deploy@main
Changing subscription context...
Validating template...
Warning:
Creating deployment...
Error:
ERROR: (DeploymentFailed) At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
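One way to catch this, as a sketch only: it assumes the arm-deploy step is given an explicit deploymentName (my-deployment here is a placeholder), that azure/login has already authenticated az on the runner, and it reuses the resource-group secret name from the workflow shared earlier. The step queries the ARM deployment directly and fails the job if it did not succeed:
- name: Verify deployment state
  shell: bash
  run: |
    # Ask ARM for the deployment's provisioning state and fail on anything but Succeeded
    state=$(az deployment group show \
      --resource-group ${{ secrets.AZURE_RG_DEV }} \
      --name my-deployment \
      --query properties.provisioningState -o tsv)
    echo "Provisioning state: $state"
    if [ "$state" != "Succeeded" ]; then exit 1; fi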
@dico-harigkev Same exact symptom here. I have had to go through my action run histories, task by task, to check whether any failed even though they showed as succeeded. I have found this on dozens of task executions since switching to @main.
I tried reverting to @v1 since I saw Azure CLI has revved to 2.24.2 (it was at 2.24.0 when I was encountering the @v1 problems), but @v1 still doesn't work for me: it fails tasks whose deployments actually succeed. So I switched back to @main since that at least lets a workflow complete.
Hoping we get an update to @v1 soon that addresses the issues.
Merged a temporary fix. Please use this until we get a permanent solution from azure-cli team.
- uses: azure/arm-deploy@main
instead of using v1
Hi @t-dedah, is there any ETA on a permanent fix for v1?
@parallo-mattallford @plzm @dico-harigkev The fix has been released by the azure-cli team, but the public agent pool is yet to be updated to the upgraded azure-cli; I am trying to get updates on this.
@dico-harigkev @plzm Can you please share more info so we can investigate better?
This is the schedule for the next image rollout for the public agents; after that, the new fix should be available to everyone, so please test with @v1 then. Orgs with self-hosted agents should update their azure-cli to 2.25.0.
Windows 2019: Fri/Mon. Other platforms: Wed next week.
@dico-harigkev @parallo-mattallford @plzm We are looking at the issue of the action reporting success even if the deployment fails. In the meantime, a workaround @t-dedah mentioned above is to use a self-hosted runner with CLI version 2.25.0 installed.
@t-dedah / @bishal-pdMSFT - can we please look into setting the az cli version if the task is going to take that dependency? Otherwise, we expose our customers to breaking changes like this with no good workarounds. Alternatively, can we take advantage of the version in the task name (i.e. azure/arm-deploy@1.1)? That is what it should be used for, no?
For the other folks on the thread who have experienced this issue, why not use the azure/CLI@v1 task and deploy the template directly with the az deployment * create ... command? This task gives you direct control over which CLI version you want to use. What is the benefit that the ARM deployment task is providing for you?
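A sketch of that alternative, assuming a Linux runner, that the repository has been checked out, that azure/login has already run, and reusing the template paths and secret names from the workflow shared earlier (the pinned azcliversion is just an example):
- uses: azure/CLI@v1
  with:
    azcliversion: 2.25.0   # pin the CLI so runner image updates can't change behavior
    inlineScript: |
      az deployment group create \
        --resource-group ${{ secrets.AZURE_RG_DEV }} \
        --template-file ./deployment/backend.json \
        --parameters @./deployment/backend.dev.params.json \
        --parameters sqlAdministratorLogin=${{ secrets.SQLADMINISTRATORLOGIN_DEV }}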
@t-dedah / @bishal-pdMSFT - can we please look into setting the az cli version if the task is going to take that dependency? Otherwise, we expose our customers to breaking changes like this with no good workarounds. Alternatively, can we take advantage of the version in the task name azure/arm-deploy@1.1? That is what it should be used for, no? For the other folks on the thread who have experienced this issue, why not use the azure/CLI@v1 task and deploy the template directly with the az deployment * create ... command? This task gives you direct control over which CLI version you want to use. What is the benefit that the ARM deployment task is providing for you?
@ashmind and I made this point above about making the CLI version a parameter. I think it should be decoupled from the version number of this action, though, and supplied in the with: block.
I did briefly look into making this change, but it looks like the az cli is pre-installed on the hosts, so this action is just implicitly relying on whatever version is already installed. I think that means it would be quite a bit of work to allow the version to be set, as an install step for the CLI would have to be added first. Maybe a setup-az-cli action which people could run at the top of their workflow would be nice; it would just clobber any versions that are pre-installed on the machine.
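Purely hypothetical sketch of that idea (neither an azcliversion input on arm-deploy nor a setup-az-cli action exists today); it only illustrates where a pinned version could live in the with: block:
- uses: azure/arm-deploy@v1
  with:
    azcliversion: 2.23.0   # hypothetical input: pin the CLI so runner image updates can't break deploys
    resourceGroupName: ${{ secrets.AZURE_RG_DEV }}
    template: ./deployment/backend.json
    parameters: ./deployment/backend.dev.params.json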
I do agree with you though that this wrapper action seems to be causing more harm than good given it's such a thin layer of indirection.
Also, I think all of the errors being reported here about the action reporting success even though the deployment failed are a direct consequence of the workaround that's been applied to main. I believe #50 disabled checking stderr because the CLI was erroneously writing whitespace to stderr, so it doesn't seem surprising that, as a consequence of this workaround, the action will now succeed even when there are errors.
@parallo-mattallford @plzm @dico-harigkev I tried to repro the issue with a template deployment, but it succeeded for me. Would it be possible to provide me with repro steps?
Hello 🙂 Since the new release (1 June 2021) of Azure CLI version 2.24.1, our GitHub Actions deployment to Azure fails on azure/arm-deploy@v1 with the following error:
The process 'C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin\az.cmd' failed because one or more lines were written to the STDERR stream
Have you experienced a similar problem, or is this not related in your opinion?
Thanks