ifeanyindukwe closed this issue 1 month ago
@ifeanyindukwe We would need to understand your inputs to investigate this. Are you using the local option?
I am using Azure Pipeline Agent Pools in Azure DevOps, and I've set my azure_devops_personal_access_token to token-1 in my inputs.yaml file. However, I'm not quite sure I understand what you mean by the 'local option.' Could you please provide more details or clarify if my response is not addressing your question?
Please find attached my inputs.yaml file. Identifiable information such as subscription IDs, emails, and generated tokens has been masked with "XXXX" for privacy reasons.
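For reference, a minimal sketch of how the relevant part of a masked inputs.yaml might look, assuming the Azure DevOps bootstrap; only keys that appear in this thread are shown, and every value is a placeholder:

# inputs.yaml (sketch; unrelated keys omitted)
azure_devops_personal_access_token: "XXXX"  # masked PAT (token-1 above)
use_self_hosted_agents: "false"             # Microsoft-hosted agents
use_private_networking: "true"              # the setting discussed below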
Thanks for this info. I see you are using Microsoft-hosted agents (use_self_hosted_agents: "false") and have use_private_networking: "true". This combination is not possible and should be handled by the code; it should be ignoring the use_private_networking option. So it looks like you may have uncovered a bug here, which we'll investigate. In the meantime, setting use_private_networking: "false" should hopefully fix it for you.
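In other words, the short-term workaround is to make the two inputs consistent in inputs.yaml; a sketch of the change:

# Before: the combination that triggered the bug
use_self_hosted_agents: "false"
use_private_networking: "true"
# After: workaround until the fix is released
use_self_hosted_agents: "false"
use_private_networking: "false"  # private networking requires self-hosted agents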
A fix has been released now in v3.0.2 of the bootstrap modules.
I have successfully set use_private_networking: false as you suggested, and it resolved the issue. However, I had to first destroy the ALZ that was already deployed and then remove the bootstrap environment from Azure DevOps. Is there an alternate way to update through Azure DevOps settings without having to destroy and redeploy it entirely and rerun the bootstrap from scratch?
I'm not sure why you had to destroy it; you should have been able to update it in place. The bootstrap stores a state file that it uses to plan just the changes. Did you delete the output folder?
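To illustrate, an in-place update is just a rerun of the same command against the same output folder, where the bootstrap's state file lives; a sketch using the paths from this thread:

# Rerunning against the existing output folder reuses the stored state file,
# so Terraform plans only the changes instead of recreating everything.
Deploy-Accelerator -inputs "c:\accelerator\config\inputs.yaml" -output "c:\accelerator\output"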
I was using a different device and initially thought that the bootstrap was a one-time process to orchestrate the initial resources in Azure and Azure DevOps. I assumed that after the initial setup, it wouldn’t be needed again. It’s interesting to learn that the bootstrap stores the state file, which allows for in-place updates rather than requiring a full redeployment. I’ll definitely keep this in mind moving forward.
However, when migrating from v3.0.1 to v3.0.2, I encountered the following error:
I executed the command:
Deploy-Accelerator -inputs "c:\accelerator\config\inputs.yaml" -output "c:\accelerator\output"
and accepted all the prompts with Y to proceed with the migration from v3.0.1 to v3.0.2.
The error I received is as follows:
Error: Update repository file failed, repositoryID: XXXX, branch: refs/heads/main, file: .pipelines/cd.yaml . Error: TF402455: Pushes to this branch are not permitted; you must use a pull request to update this branch.
│
│ with module.azure_devops.azuredevops_git_repository_file.alz[".pipelines/cd.yaml"],
│ on ..\..\modules\azure_devops\repository_module.tf line 11, in resource "azuredevops_git_repository_file" "alz":
│ 11: resource "azuredevops_git_repository_file" "alz" {
Could you please advise on how to resolve this issue?
This is a known issue that I plan to take another look at. In the meantime, you can manually turn off branch protection, or turn it off in the inputs: https://github.com/Azure/ALZ-PowerShell-Module/issues/136
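A sketch of the inputs-based route; note that the key name below is hypothetical, so check the linked issue for the exact input your bootstrap version supports:

# inputs.yaml — hypothetical key name; see the issue linked above for the
# actual input that controls branch policies in your version.
apply_branch_policy: "false"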
Thank you for your valuable feedback. I will give it a try and see how it works.
Thank you @jaredfholgate and to everyone involved for the support and valuable feedback! The issue was resolved after following the suggestions provided by the team. I successfully migrated from v3.0.1 to v3.0.2 by temporarily turning off branch protection and rerunning the deployment. Everything functions as expected, and I'm excited to continue using the accelerator.
However, I observed that during the upgrade, my existing files in the repository (e.g., main.tf, locals.tf, terraform.tf, variables.tf, config.tf) were overwritten. It is important for others to be aware of this before performing any future upgrades. Please ensure you back up custom configurations to avoid any loss of work; one way to do this is sketched below.
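A minimal sketch of one way to take such a backup, assuming the repository is a normal git clone; the branch name is illustrative:

# Snapshot the current files on a branch before rerunning the accelerator,
# so anything it overwrites can be diffed and recovered afterwards.
git checkout -b backup/pre-upgrade
git add -A
git commit -m "Backup before accelerator upgrade"
git checkout main
# After the upgrade, compare and selectively restore custom changes:
git diff backup/pre-upgrade -- main.tf locals.tf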
I appreciate the quick response and look forward to any future improvements. Thanks again for the assistance!
The deployment of the Azure Landing Zone consistently fails during the Terraform initialization (terraform init) stage. The deployment only succeeds when I manually alter the Azure Storage Account’s network configuration. Specifically, I need to change the network access setting from its default (restricted access) to "Enabled from all networks" in the Azure Portal.
Steps to Reproduce:
Encountered a 403 Authorization Failure when attempting to initialize Terraform with Azure. Error: Failed to get existing workspaces: containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailure" Message="This request is not authorized to perform this operation.\nRequestId:XXXX\nTime:2024-08-26T01:37:53.6259929Z"
Please investigate why the default network configuration is causing the deployment to fail and recommend a solution that doesn't require manual intervention.
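For anyone blocked on this before picking up the fix, a sketch of the manual workaround described above using the Az PowerShell module; the resource group and storage account names are placeholders:

# Temporarily allow access from all networks on the state storage account so
# terraform init can list blobs from a Microsoft-hosted agent; revert this
# (or switch to self-hosted agents with private networking) after the run.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "rg-alz-mgmt" -Name "stalzstatexxxx" -DefaultAction Allow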