Closed: renattomachado closed this issue 5 years ago
Thanks for the feedback! We are currently investigating and will update you shortly.
Vote for this! Azure Pipeline host IP ranges https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml#agent-ip-ranges
@XiaoningLiu that's not usual.
@renattomachado
I came across this particular issue as well. If your application is not mission critical (that is, if a brief window of public access would not cripple your business), I recommend the following:
# Before the deployment, temporarily allow public access:
az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow
# After the deployment, lock the account down again:
az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
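For example, here is a rough sketch of how those two commands might be wrapped around the deployment steps in a pipeline (the service connection name is a placeholder, and condition: always() makes sure the account gets locked again even if the deployment fails):

steps:
- task: AzureCLI@2
  displayName: 'Temporarily open storage account'
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: 'az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow'

# ... your deployment / file copy steps go here ...

- task: AzureCLI@2
  displayName: 'Lock storage account down again'
  condition: always()   # re-apply Deny even if an earlier step failed
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: 'az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny'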
Hopefully that helps you out.
@XiaoningLiu
Is there a way to get the IP range/public IP of the agent? I tried using curl against https://ipecho.net/ and adding that IP (all in a task within a pipeline), but it doesn't seem to work reliably. Ideally we could just get the IP of the agent we're using, add it to the whitelist, and remove it immediately afterwards. That way we don't have to run some sort of job that parses the weekly set of IPs for our region(s).
Same issue here. Using Azure DevOps to deploy Azure resources with Terraform. Due to security requirements, we need to turn on VNET firewall rules. As soon as we turn it on, Terraform is not able to retrieve Storage account information (403) from Azure DevOps and the deployment pipeline breaks.
"Allow trusted Microsoft services to access this storage account" is enabled, but obviously Azure DevOps is not recognized as such...
Is there any news on this one? We want our storage accounts to be secured, but we also want to deploy with DevOps Pipelines, and we're hitting the same issues as everyone else here.
Here is how to find out what the IP ranges are: https://visualstudio.microsoft.com/team-services/support/ip-addresses-used-hosted-build/
@artisticcheese
I think the issue is that using that XML would then require some automation or process that maintains the allow list and removes old IPs.
Ideally there would be a way to get the IP of the machine that you're using while running a hosted agent, that way you can do something similar to what I described above, except for a single machine (the one running the DevOps job).
Yes, and in addition the firewall seems to be limited to only 100 entries anyway, so this is not going to work.
We are working on a plan to enable "tag" or "alias" definitions to allow these kinds of ranges to be defined by the service. No ETA yet.
Why is this issue being closed when it hasn't been addressed? "...working on a plan..." isn't really dealing with the issue.
@nickforr Because we are working on it. I don't have an ETA on this yet.
Currently experiencing the same issue, is there any update now?
@SumanthMarigowda-MSFT or @cbrooksmsft - Is there any way of tracking the feature request relating to this - even if you're not comfortable sharing an ETA it would be useful to be notified when it's been done. Thanks!
Similar to what @tabeth suggests, in many cases I was changing the firewall rules of different resources (SQL, Function App, etc.) to temporarily allow traffic from the DevOps agent IP. However, this does not work for storage accounts, because of this:
IP network rules have no effect on requests originating from the same Azure region as the storage account.
@cbrooksmsft Can you please explain why you asked for this issue to be closed? This is still broken. If this issue was raised in the wrong place, then can you please provide a link to where the replacement bug has been raised that is tracking this issue so that it will be resolved?
This issue was raised on 24th November 2018. Is it possible it was mistakenly closed without creating the correct replacement issue? I ask because we're now 2 years down the line and it looks like this bug still exists.
The work-arounds suggested here (of simply removing security) could put customer data at risk.
please advise,
txs Alan
They seemed to be planning to have it in Q2 2020 https://devblogs.microsoft.com/devops/azure-devops-roadmap-update-for-2020-q2/
https://dev.azure.com/mseng/AzureDevOpsRoadmap/_workitems/edit/1710676
@artisticcheese you linked to a very broad announcement. Which part of that announcement do you think addresses this issue? If you mean "service tags", then I don't think it addresses the issue. Service tags look like a helpful grouping of rules per "tag", but the problem described here would still exist (specifically, that Azure DevOps is not automatically whitelisted by the inclusion of --bypass "Logging Metrics AzureServices").
Perhaps I've missed something?
To recap for anyone new to this thread: when I create an Azure storage resource for use in an Azure DevOps pipeline as below
az storage account create -g "{resource-group-redacted}" `
  -n "{storage-account-name-redacted}" `
  -l "westeurope" `
  --kind "StorageV2" `
  --sku "Standard_RAGRS" `
  --access-tier "Hot" `
  --bypass Logging Metrics AzureServices `
  --default-action "Deny" `
  --https-only "true"
and I have a step in my Azure DevOps pipeline that uses AzureFileCopy@3, e.g.
- task: AzureFileCopy@3
  displayName: "Publish files to '$(my_storage)' storage"
  inputs:
    SourcePath: $(Build.ArtifactStagingDirectory)
    azureSubscription: $(subscription)
    Destination: AzureBlob
    storage: $(my_storage)
    ContainerName: $web
then it fails with a permission error:
Failed to validate destination. The remote server returned an error: (403) Forbidden.
When in fact (if the documentation is to be believed) using --bypass AzureServices when creating the storage account is supposed to grant Azure DevOps permission to access the storage account.
I believe this is the crux of the problem, and nothing in the Q2 2020 announcement appears to address this specific issue.
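As a sanity check (not a fix), you can dump what the account actually ended up with; here is a sketch, reusing the placeholders and the $(subscription) variable from above:

steps:
- task: AzureCLI@2
  displayName: 'Show storage network rules (sanity check)'
  inputs:
    azureSubscription: $(subscription)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Expect bypass to include AzureServices and defaultAction to be Deny
      az storage account show \
        --resource-group "{resource-group-redacted}" \
        --name "{storage-account-name-redacted}" \
        --query "networkRuleSet" --output json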
I would love to be wrong on this. A
They seemed to be planning to have it in Q2 2020 https://devblogs.microsoft.com/devops/azure-devops-roadmap-update-for-2020-q2/
https://dev.azure.com/mseng/AzureDevOpsRoadmap/_workitems/edit/1710676
Not sure that it is planned. It specifically states: Service Tag for Microsoft Hosted Agents for Pipelines are not supported.
I also happen to be experiencing the same issue. Could anyone officially confirm this is still planned?
I am unable to connect an Azure DevOps pipeline Microsoft-hosted agent to a storage account. Is this going to get resolved?
@cbrooksmsft can you please reply to the questions above? You requested that this issue be closed, but it does not seem to be resolved. If it has actually been resolved, it would be considerate (professional) of Microsoft to at least comment here to say, "hey guys, this is fixed by doing X".
Same Issue
Reference to the upcoming changes for service tags. This still doesn’t resolve our issue:
The Service Tag does not apply to Microsoft Hosted Agents. Customers are still required to allow the entire geography for the Microsoft Hosted Agents.
True, didn't read it properly 😓
I'm running into this same problem and I'm frankly surprised this issue was closed. My first thought was that I'm going to have to build a scheduled tool to parse their weekly JSON file here with all of the IP ranges, but they don't appear to provide an API for that. So my best option is to manually download the JSON file every week, feed it to some parsing process, and then push the IP ranges to allow (we're talking hundreds) to the storage account firewall settings? Surely there's a better way. C'mon Microsoft!
There is an API, it just does not work (it's way out of date): https://docs.microsoft.com/en-us/rest/api/virtualnetwork/servicetags/list
Also, it's fairly easy to automate (downloading/parsing the JSON files, etc.): https://artisticcheese.wordpress.com/2020/08/17/automating-azure-sql-firewall-rules-based-on-azure-service-tags/
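For example, a rough sketch of pulling the ranges for a single service tag from inside a pipeline (I believe az network list-service-tags is the CLI wrapper for that same REST API; the service connection name is a placeholder):

steps:
- task: AzureCLI@2
  displayName: 'Fetch AzureDevOps service tag ranges (sketch)'
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Pull the current service tag list for one region and keep only the AzureDevOps tag
      az network list-service-tags --location westeurope \
        --query "values[?name=='AzureDevOps'].properties.addressPrefixes[]" --output tsv > devops_ranges.txt
      wc -l devops_ranges.txt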
@artisticcheese, thanks for the info. Interesting thoughts, but I'm not comfortable scraping their webpage to pull that JSON file. This is a production app and that's too risky. And apart from that, this method would involve hundreds of entries in the storage account firewall to simply let my hosted Azure pipeline copy a few files over. It's doable, but it's overkill for my purposes.
Gentlemen, I've gathered you here today to plead for your vote! -> https://developercommunity.visualstudio.com/content/problem/1189404/azuredevops-dont-considerate-as-microsoft-services.html
This needs to be fixed ASAP, it's an ongoing issue for the last two years... share, tweet, help fix it and get it on Microsoft's radar 👍
@renattomachado @mimckitt @AdamS-MSFT @XiaoningLiu @tabeth @sesispla @SeiketsuJael @artisticcheese @cbrooksmsft @nickforr @SumanthMarigowda-MSFT @solaomoDevOps @shahiddev @marcin-vt @goblinfactory @artisticcheese @lymedo @felipecruz91 @pratimvengurlekar @justinimel @marcin-vt @gradyal
@BobbyCGD Voted
As a workaround, here is a task I use...
- bash: |
    # Try the last 30 days until we find a published ServiceTags_Public_<date>.json file
    sudo apt-get -y install grepcidr
    for d in {0..30}; do
      date_string=`date -d "-${d} days" +%Y%m%d`
      url="https://download.microsoft.com/download/7/1/D/71D86715-5596-4529-9B13-DA13A5DE5B63/ServiceTags_Public_${date_string}.json"
      echo "Trying '${url}'"
      curl -X GET -sfLO ${url}
      if [ -f "ServiceTags_Public_${date_string}.json" ]; then
        break
      fi
    done
    # Flatten every tag's address prefixes into one list, then check the agent's public IP against it
    cat ServiceTags*.json | jq -r '.values[].properties.addressPrefixes[]' > networks.txt
    IP=`curl -s http://ipinfo.io/json | jq -r '.ip'`
    echo "Current IP is '${IP}'"
    grepcidr -f networks.txt <(echo "$IP") >/dev/null && echo "${IP} belongs to the trusted Azure Service tags addresses" || exit 1
    echo "##vso[task.setvariable variable=AGENT_IP;issecret=true]${IP}"
  displayName: Get agent IP
@lymedo @arkiaconsulting
I think so. I'm using a Windows agent and have opted not to check whether the agent's IP address is in the list of IP addresses in the service tag. Here is the step I added to my pipeline:
steps:
- bash: |
    IP=`curl -s http://ipinfo.io/json | jq -r '.ip'`
    echo "Current IP is '${IP}'"
    echo "##vso[task.setvariable variable=agentIp;issecret=true]${IP}"
  displayName: 'env: set $(agentIp)'
I then use
steps:
- task: AzureCLI@2
  displayName: 'env: add agent_ip to firewall'
  inputs:
    azureSubscription: '***'
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: 'az storage account network-rule add --account-name $(apimStorageAccountName) --ip-address $(agentIp)'
before trying to publish to the storage account, and after I'm done publishing I use the following to remove the rule again:
steps:
- task: AzureCLI@2
  displayName: 'env: remove agent_ip from firewall'
  inputs:
    azureSubscription: '***'
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: 'az storage account network-rule remove --account-name $(apimStorageAccountName) --ip-address $(agentIp)'
@BobbyCGD If you don't check the IP found by ipinfo, you're assuming that it hasn't been hacked... Be careful in a production environment!
@arkiaconsulting Definitely a risk, but in my specific use case not a problem. Deployments are always "supervised", so the worst thing that could happen is that the deployment step fails.
Sadly https://developercommunity.visualstudio.com/content/problem/1189404/azuredevops-dont-considerate-as-microsoft-services.html has been closed. Hopefully they will consider re-opening it and looking into the issue.
I've not been keeping an eye on this thread... it's so old, and yet it is still a problem. I don't think it's right that this should be marked as "closed"... I'll refrain from commenting further... but life has to go on, so below is a workaround I'm using that seems to be working well for me.
This approach may not work for you, but just in case it does, here's a sample PowerShell script that has worked for me in every DevOps pipeline so far. You can experiment with it and see if it works in your pipeline as well:
https://gist.github.com/goblinfactory/1f75678c45b2917b29fcb5158550024c
The advantage of this PowerShell is that you can run it locally exactly the same way it runs on the DevOps build agent, which makes it easier to tweak.
hopefully useful.
best of luck
regards
Alan
This is still an issue. Please re-open this.
Absolutely.. we're hitting issues with this & temporarily adding the IP to the firewall isn't working (not 100% sure why). It's a blocker when dealing with storage accounts in a vnet secured with a firewall :(
@e14mattc we're having the exact same issue you are. Sometimes adding the build agent's IP (we had to write code to get that ourselves) works, but sometimes it doesn't (who knows why---maybe it's changing). It's just a big deal when this could be so easy.
I just implemented the same "add the agent ip" fix for this last week, and I found I had to add a 90 second sleep script to my pipeline after adding the IP to guarantee the firewall changes take effect before the rest of my pipeline runs. That may be why you only see it working sometimes?
I do agree though, a better fix from MS would be welcome.
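If a fixed sleep feels fragile, one alternative is to poll until the rule is actually visible; here is a sketch, reusing the $(agentIp) and $(apimStorageAccountName) variables from the steps earlier in this thread (the service connection is a placeholder):

steps:
- task: AzureCLI@2
  displayName: 'Wait for firewall rule to be visible'
  inputs:
    azureSubscription: '***'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Poll the storage account's IP rules until the agent IP shows up (give up after ~3 minutes)
      for i in {1..18}; do
        found=$(az storage account network-rule list --account-name $(apimStorageAccountName) \
          --query "ipRules[?ipAddressOrRange=='$(agentIp)'] | length(@)" --output tsv)
        if [ "$found" != "0" ]; then echo "Rule is visible"; exit 0; fi
        sleep 10
      done
      echo "Rule never became visible"; exit 1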
@jake-subvrsive I wrote some fancy PowerShell to retry up to 5 times with a 60-second delay in between, and I still see failures, albeit rarely.
Sharing my code here in case it helps anyone who is also struggling. @microsoft please make our lives easier.
<#
.DESCRIPTION
Used to pass a script block (closure) to be executed and returned. The
operation is retried if PowerShell throws an error/exception. This works well
with the Azure PowerShell module.
.EXAMPLE
PS> Invoke-RetryPowerShellScriptBlock { Throw "error" }
.EXAMPLE
PS> Invoke-RetryPowerShellScriptBlock { Get-AzResource -ResourceGroupName xyz -ResourceName abc }
#>
function Invoke-RetryPowerShellScriptBlock {
    [CmdletBinding()]
    Param(
        [Parameter(Position = 0, Mandatory = $true)]
        [scriptblock] $ScriptBlock,
        [Parameter(Position = 1, Mandatory = $false)]
        [int] $MaximumRetries = 5,
        [Parameter(Position = 2, Mandatory = $false)]
        [int] $RetryDelay = 60
    )
    Begin {
        $counter = 0
    }
    Process {
        Do {
            $counter++
            Try {
                $result = & $ScriptBlock -ErrorAction Stop
                Return $result
            }
            Catch {
                Write-Host $_
                Write-Host $_.ScriptStackTrace
                Write-Host $_.Exception
                Write-Host $_.ErrorDetails
                Write-Host $_.Exception.InnerException.Message
                if ($counter -eq $MaximumRetries) {
                    Throw "Attempt $counter out of $MaximumRetries wasn't successful. Maximum number of retries exceeded."
                }
                else {
                    Write-Host "Attempt $counter out of $MaximumRetries wasn't successful. Sleeping for $RetryDelay seconds..."
                    Start-Sleep -Seconds $RetryDelay
                }
            }
        } While ($counter -lt $MaximumRetries)
    }
}
<#
.DESCRIPTION
Used to pass a script block (closure) to be executed and returned. The success
or failure of this block is determined by the return object from the script
block. This works well for the Azure CLI. The Azure CLI doesn't raise
exceptions/throw errors that PowerShell can catch. Instead, it simply will
return nothing. A return of nothing is how we're determining if there has been
an "error" here.
.EXAMPLE
Invoke-RetryAzCli {
    az storage blob upload-batch --source my_dir_to_upload
        --destination my_container_name --account-name my_account_name
        --account-key xyz
}
#>
function Invoke-RetryAzCli {
    [CmdletBinding()]
    Param(
        [Parameter(Position = 0, Mandatory = $true)]
        [scriptblock] $ScriptBlock,
        [Parameter(Position = 1, Mandatory = $false)]
        [int] $MaximumRetries = 5,
        [Parameter(Position = 2, Mandatory = $false)]
        [int] $RetryDelay = 60
    )
    Begin {
        $counter = 0
    }
    Process {
        Do {
            $counter++
            $result = & $ScriptBlock -ErrorAction Stop
            If (!$result) {
                Write-Host $_
                Write-Host $_.ScriptStackTrace
                Write-Host $_.Exception
                Write-Host $_.ErrorDetails
                Write-Host $_.Exception.InnerException.Message
                if ($counter -eq $MaximumRetries) {
                    Throw "Attempt $counter out of $MaximumRetries wasn't successful. Maximum number of retries exceeded."
                }
                else {
                    Write-Host "Attempt $counter out of $MaximumRetries wasn't successful. Sleeping for $RetryDelay seconds..."
                    Start-Sleep -Seconds $RetryDelay
                }
            }
            Else {
                Return $result
            }
        } While ($counter -lt $MaximumRetries)
    }
}
<#
.DESCRIPTION
This is the same as Invoke-RetryAzCli, except it takes some additional
arguments specific to storage accounts---for some reason, storage accounts
seem to be problematic---and returns additional debugging information related
to the storage account.
.EXAMPLE
Invoke-RetryAzCliStorageAccount {
    az storage blob upload-batch --source my_dir_to_upload
        --destination my_container_name --account-name my_account_name
        --account-key xyz
} -AccountName my_account_name -ResourceGroupName my_resource_group
#>
function Invoke-RetryAzCliStorageAccount {
    [CmdletBinding()]
    Param(
        [Parameter(Position = 0, Mandatory = $true)]
        [scriptblock] $ScriptBlock,
        [Parameter(Position = 1, Mandatory = $true)]
        [string] $AccountName,
        [Parameter(Position = 2, Mandatory = $true)]
        [string] $ResourceGroupName,
        [Parameter(Position = 3, Mandatory = $false)]
        [int] $MaximumRetries = 5,
        [Parameter(Position = 4, Mandatory = $false)]
        [int] $RetryDelay = 60
    )
    Begin {
        $counter = 0
    }
    Process {
        Do {
            $counter++
            $result = & $ScriptBlock -ErrorAction Stop
            If (!$result) {
                Write-Host "---- Logging Exceptions ----"
                Write-Host $_
                Write-Host $_.ScriptStackTrace
                Write-Host $_.Exception
                Write-Host $_.ErrorDetails
                Write-Host $_.Exception.InnerException.Message
                Write-Host "---- What's My IP? ----"
                Get-MyIp
                Write-Host "---- Storage Account Network Rules From Az CLI ----"
                az storage account show --name $AccountName --resource-group $ResourceGroupName
                if ($counter -eq $MaximumRetries) {
                    Throw "Attempt $counter out of $MaximumRetries wasn't successful. Maximum number of retries exceeded."
                }
                else {
                    Write-Host "Attempt $counter out of $MaximumRetries wasn't successful. Sleeping for $RetryDelay seconds..."
                    Start-Sleep -Seconds $RetryDelay
                }
            }
            Else {
                Return $result
            }
        } While ($counter -lt $MaximumRetries)
    }
}
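For completeness, here is a rough sketch of how these helpers might be wired into a pipeline step; the retry-helpers.ps1 path is a hypothetical location for the functions above, and $(subscription) and $(my_storage) reuse variable names from earlier in the thread:

steps:
- task: AzureCLI@2
  displayName: 'Upload to $web with retries (sketch)'
  inputs:
    azureSubscription: $(subscription)
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # retry-helpers.ps1 is a hypothetical file containing the functions above
      . ./pipelines/retry-helpers.ps1
      Invoke-RetryAzCli {
        az storage blob upload-batch --source "$(Build.ArtifactStagingDirectory)" --destination '$web' --account-name $(my_storage)
      }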
@ericchansen Using the "add agent ip" fix didn't work for us either. I was able to determine the root cause of the random failures by querying the VM metadata of the agent. Here's the scoop:
The "add agent ip" logic is only for the public IP address, since you cannot add a private IP address to a resources. Whenever the agent spawned within the same region as our resource, it was using the private IP address to connect. You can see the note in this document here about how intra-region IP network rules do not work.
Note
IP network rules have no effect on requests originating from the same Azure region as the storage account. Use Virtual network rules to allow same-region requests.
To confirm this theory, I added a task that queried the agent's VM metadata (a sketch of that kind of check is below). Any time the pipeline failed, the location in the JSON response matched the resource's location, proving the agent was using its private IP address to connect to the resource. The successful runs all came from a different region, which means the public IP address was used.
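Not our exact task, but a minimal version of that metadata check could look something like this (it queries the Azure Instance Metadata Service, which is only reachable from inside the agent VM):

steps:
- bash: |
    # Ask the Azure Instance Metadata Service which region this hosted agent landed in
    region=$(curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/location?api-version=2021-02-01&format=text")
    echo "Agent region: ${region}"
  displayName: 'Log agent region'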
Not saying this is a solution, but I want to share our experience with the VM scale set option: we ended up using a VM scale set deployed within a trusted VNET that we are able to add to our resources. I was surprised how easy it was to set up, and there is no long-term management of the VMs either, since they are ephemeral. Azure DevOps' integration handles a lot of the work, including the scaling events. And it scales to zero VMs too, so we only pay for the VMs while we're running deployments.
The only problem we ran into was that the base Ubuntu VM doesn't have some of the common packages you get out of the box with a Microsoft-hosted Azure DevOps agent, like the Azure CLI, unzip (for the Terraform install task), jq, etc. So we added a simple task to install these packages at runtime (a sketch is below), eliminating the need to manage a custom VM image.
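Roughly, the install step can be as simple as something like this (Ubuntu base image assumed; the Azure CLI install script is Microsoft's documented one-liner; adjust the package list to whatever your pipeline needs):

steps:
- bash: |
    # Install tools the Microsoft-hosted images normally ship with (scale set agents start bare)
    sudo apt-get update
    sudo apt-get install -y unzip jq
    curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
  displayName: 'Install agent prerequisites'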
Hope this helps.
Thank you @jonmaestas. That seems better than my hypothesis, which was that the build agent might simply change IPs mid-job. If I get the time to confirm, I'll reply back here.
My opinion still stands that this is a major headache for a variety of users and @microsoft should re-open this issue.
This is still an issue. Please re-open this!!!!!!!!
Just spent a couple of hours trying to set up a "quick and easy" CD pipeline for our static content, but this pretty effectively stopped that track dead. Back to manual publishing...
I'm using a storage account to upload files with Azure DevOps release pipelines. On the storage account, under "Firewalls and virtual networks", I check the option "Allow trusted Microsoft services to access this storage account", but my release fails. Only when I check "All networks" does my build succeed.