dsccommunity / AzureDevOpsDsc

This module contains DSC resources for deployment and configuration of Azure DevOps Services initially, and later Azure DevOps Server.
MIT License

AzureDevOpsDsc: Running integration test in pipeline #9

Open johlju opened 3 years ago

johlju commented 3 years ago

The integration tests are failing due to a lack of an Integration environment (Azure DevOps Services instance) and related API key.

To get the integration tests passing, the following variables need to be added to the build/pipeline:

... and...
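For context, secret pipeline variables are defined in the Azure Pipelines UI (or in a variable group) and must be mapped explicitly into a task's environment before a script can read them. A minimal sketch, with hypothetical variable and group names (the actual names are listed in the linked comment):

```yaml
# Sketch only - variable and group names here are hypothetical.
variables:
  # Secrets live in the pipeline UI or a variable group; note that
  # secrets are NOT made available to PR builds from forks.
  - group: AzureDevOpsDsc.Integration

steps:
  - pwsh: ./build.ps1 -Tasks test -PesterTag 'Integration'
    env:
      # Secret variables must be mapped explicitly into the environment.
      AZDO_ORG_URI: $(AzureDevOpsOrganizationUri)
      AZDO_PAT: $(AzureDevOpsPat)
```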

Originally posted by @SphenicPaul in https://github.com/dsccommunity/AzureDevOpsDsc/issues/7#issuecomment-766335577

johlju commented 3 years ago

The problem is that the integration tests cannot run for PRs, since the secret pipeline values are not added to PR builds. That makes the build fail only on the main branch, and by then it's "too late" to fix the build issue in the PR. So I'm thinking we should look at running the integration tests manually, with each contributor setting up their own "destructible" DevOps tenant.

johlju commented 3 years ago

The part of the pipeline that runs the integration tests is commented out, so if there is no good way to run the integration tests for PRs, this entire job should be removed (if we move to running them manually).

https://github.com/dsccommunity/AzureDevOpsDsc/blob/8d43d12989359b8facdf46d86011a34dfe787898/azure-pipelines.yml#L194-L221
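Rather than deleting the job, it could be gated so it only runs for non-PR builds on main. A sketch using standard Azure Pipelines condition expressions (job name hypothetical):

```yaml
# Sketch: skip the integration-test job for PR builds instead of
# removing it; it then only runs on pushes to the main branch.
- job: Test_Integration
  condition: >-
    and(succeeded(),
        ne(variables['Build.Reason'], 'PullRequest'),
        eq(variables['Build.SourceBranch'], 'refs/heads/main'))
```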

SphenicPaul commented 3 years ago

Some initial thoughts:

SphenicPaul commented 3 years ago

Additionally, this is relevant to documentation here which will need amending as appropriate.

johlju commented 3 years ago

Agreed, we need integration tests.

Pull the API key and URI of a PR-author-specific environment (or as part of the PR in some other way?)

I don't see any secure way of getting the API key for a PR that couldn't leak to other contributors/maintainers. 🤔

Trigger a PR-author-specific-integration build (in their own, public organization)

That might be the only way. Do not run integration tests in the PR; instead, the contributor must configure Azure Pipelines against their fork. We add documentation on how a contributor creates a "destructible" Azure DevOps tenant connected to the fork, so integration tests can run for the working branches in the fork. Then we add an entry to the PR template that clearly says a PR must include a link to a passing test run.

We run the integration tests on merge to main, and verify the PR in the contributor's pipeline. There is a slim risk that it will still fail in main. It is a lot of work to contribute though, and more work to review.

If there is a resource (or simple script for now) for installing the OnPrem/Server version of Azure DevOps, these integration tests could be run directly on the build server

This would be the best way to go. Looking at the software requirements, it could be feasible: the Microsoft-hosted agents run on Standard_DS2_v2 hardware, which has 7 GB of memory. So if we can limit SQL Server to 2 GB (or 1 GB), and Azure DevOps Server can run on the (not recommended) 2 GB of memory, then it could work.
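Capping SQL Server's memory is a standard `sp_configure` change. A sketch using `Invoke-Sqlcmd` (from the SqlServer PowerShell module); the instance name is an assumption:

```powershell
# Sketch: cap SQL Server at 2 GB so Azure DevOps Server fits alongside it
# on a Standard_DS2_v2 hosted agent (7 GB RAM). Instance name assumed.
Invoke-Sqlcmd -ServerInstance 'localhost\SQLEXPRESS' -Query @'
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 2048;  -- value is in MB
RECONFIGURE;
'@
```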

johlju commented 3 years ago

I have created a new Azure DevOps organization (https://dev.azure.com/azuredevopsdsc/) that is destructible, and added a PAT for it that is available when running builds on main. The organization is entirely separate, with its own accounts, so we could potentially add maintainers to it if needed.

This does not solve this issue, but at least the existing tests, and new ones that are created, can run. There will still be a problem if a PR is merged and the build then fails on the integration tests.
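For reference, a build step would authenticate to that organization's REST API with the PAT using Basic auth (empty username, PAT as password). A sketch; the environment variable name is an assumption:

```powershell
# Sketch: call the destructible organization's REST API with the PAT
# that main-branch builds receive (variable name is hypothetical).
$pat = $env:AZDO_PAT   # mapped from the secret pipeline variable
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))

Invoke-RestMethod `
    -Uri 'https://dev.azure.com/azuredevopsdsc/_apis/projects?api-version=6.0' `
    -Headers @{ Authorization = "Basic $token" }
```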

SphenicPaul commented 3 years ago

Useful SQL Server installation information from SqlServerDsc project:

Useful links for Azure DevOps Server 2020 RTW (from here):

SphenicPaul commented 3 years ago

Just as an update... I've been taking a look at this and am now at a point where the Azure DevOps installer/EXE is downloaded, then Azure DevOps Server is installed and configured automatically/programmatically.

So far, it takes about 25 minutes on a hosted build agent (the majority of that time is the installation itself, but there might be ways of skipping unused components to speed this up at some point).

Currently, I am trying to (and need to) determine:

Also note that Azure DevOps will seemingly install SQL Server Express as part of the configuration step, so I've been able to perform a successful installation and configuration of a 'BasicInstall' Azure DevOps Server (successful according to the logs, although I've not been able to connect to it yet) without having to perform a separate SQL Server installation.

I'm unclear at present what features using this default SQL Server Express will include/omit from the Azure DevOps Server instance, but I've been trying to focus on getting a basic Azure DevOps Server instance running before considering anything else.
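The unattended flow described above roughly follows Microsoft's `TfsConfig unattend` documentation. A sketch of the shape of it; installer file name, paths, and switches are assumptions and may differ by Azure DevOps Server version:

```powershell
# Sketch of an unattended install/configure (per Microsoft's TfsConfig
# docs; exact paths and switches may vary by version).
$installer = '.\AzureDevOpsServer2020.exe'
Start-Process $installer -ArgumentList '/silent' -Wait   # lays down the bits only

$tfsconfig = 'C:\Program Files\Azure DevOps Server 2020\Tools\TfsConfig.exe'
# The 'Basic' deployment type configures the server and can pull in
# SQL Server Express, avoiding a separate SQL Server installation.
& $tfsconfig unattend /create /type:Basic /unattendfile:basic.ini
& $tfsconfig unattend /configure /unattendfile:basic.ini
```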

SphenicPaul commented 3 years ago

I might park the above for a bit and have a think on it, and possibly look at alternative options... specifically around running (or confirming the run of) a contributor-specific pipeline/build (in their own Azure DevOps organization) as part of the PR build, based on the same PR commit from the same repository.

Some notes:

I'm slightly steering towards this being my preferred solution (if I can get to something workable), as it avoids the 25+ minute pre-integration-test setup time (see my previous comment/post) and allows the integration tests to complete within a few minutes (at present) - nice, fast feedback from the integration tests 😄.

We'd also be testing against an up-to-date version/instance - it would remove a little maintenance relating to upgrading integration-test target instances, and give us relatively quick visibility of API changes that break the module's functionality.

The setup of the builds for new contributors is still likely to be more time-consuming using this approach though.

SphenicPaul commented 3 years ago

And also, just putting it out there (even though it's bad practice, and potentially, higher risk)...

What are the downsides/risks of making the variable (in the build pipeline) that holds the PAT a non-sensitive variable (so the PR builds can use/see it), while ensuring its scope is limited to the resources it can manage?

I'd guess the PAT would no longer be protected (and effectively public to anyone willing to create a PR/change/whatever to uncover it, as it wouldn't be suppressed in the logs - although we could potentially create a second, sensitive variable with the same value/PAT to obfuscate this?).

The PAT would only provide access to the destructible 'organization' and any resources in it (assuming it is scoped and created correctly), so the likely problems would be people deliberately messing up resources within the instance to hinder/impact the build or integration environment (which would/could be "reset" during a build anyway).

This would also mean that the PRs couldn't run in parallel as they would all be running against the same instance (and there would have to be some lock mechanism in the build to prevent this).
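One possible lock mechanism, assuming the Azure Pipelines in use supports exclusive-lock checks on environments: serialize the runs through a shared environment. A sketch (environment name hypothetical):

```yaml
# Sketch: queue integration runs one at a time via an environment that
# has an "Exclusive lock" check configured (names are hypothetical).
lockBehavior: sequential

stages:
  - stage: Test_Integration
    jobs:
      - deployment: Integration
        environment: AzureDevOpsDsc-Integration   # carries the exclusive lock
        strategy:
          runOnce:
            deploy:
              steps:
                - pwsh: ./build.ps1 -Tasks test -PesterTag 'Integration'
```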

This seems like it may be less work than the other options? That said, making a PAT deliberately visible (even though it's a little work to get hold of, and grants access to little of any use/value) does seem non-preferred.

Thoughts?

SphenicPaul commented 3 years ago

Also, I've just found the following Microsoft answer, which might suggest that obtaining an API key via the API is not going to be an option (relevant to the option of installing a build-server copy of Azure DevOps Server)...

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=preview-page#q-is-there-a-way-to-renew-a-pat-via-rest-api

... which might suggest some form of automation via the GUI would be required to obtain an API key.