johlju opened this issue 3 years ago
The problem is that the integration tests cannot run for PRs since the secret pipeline values are not added to the build for PRs. That makes the build fail on the main branch only, and then it's "too late" to fix the build issue for the PR. So I'm thinking we should look at making the integration tests run manually, with each contributor setting up their own "destructible" DevOps tenant.
The part that runs the integration tests is commented out, so if there is no good way to run the integration tests for PRs, this entire job should be removed (if moving to running them manually).
Some initial thoughts:
(like the SqlServerDsc module) - a significant amount of work with the initial resource to get to that point, and there might be memory limitations on the build agents. Additionally, this is relevant to the documentation here, which will need amending as appropriate.
Agree we need integration tests.
Pull the API key and URI of a PR-author-specific environment (or as part of the PR in some other way?)
I don't see any secure way of getting the API key for a PR that couldn't leak to other contributors/maintainers. 🤔
Trigger a PR-author-specific-integration build (in their own, public organization)
This might be the only way. Do not run integration tests in the PR; instead, the contributor must configure Azure Pipelines against their fork. We add documentation on how a contributor creates a "destructible" Azure DevOps tenant on top of the one the fork is connected to, so integration tests can run for the working branches in the fork. Then we add an entry to the PR template that clearly says a PR must include a link to a passing test run.
We run the integration tests on merge to main. We verify the PR in the contributor's pipeline. There is a slim risk that it will fail in main. It is a lot of work to contribute though, and more work to review.
If there is a resource (or simple script for now) for installing the OnPrem/Server version of Azure DevOps, these integration tests could be run directly on the build server
This would be the best way to go. Looking at the software requirements it could be feasible: the Microsoft-hosted agents use Standard_DS2_v2 hardware, which has 7 GB of memory. So if we can limit SQL Server to 2 GB (or 1 GB), and Azure DevOps Server can run on the (not recommended) 2 GB of memory, then it could work.
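As a rough sketch of capping SQL Server's memory, something like the following could run as a build step. This assumes the SqlServer PowerShell module is available on the agent and a default local instance; the instance name and the 2 GB figure are illustrative, not tested values.

```powershell
# Sketch: cap SQL Server's memory so Azure DevOps Server fits alongside it
# on a Standard_DS2_v2 agent (7 GB total). Assumes the SqlServer module
# and a local default instance ('localhost') - both are assumptions here.
Import-Module SqlServer

$query = @'
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
'@

Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query
```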
I have created a new Azure DevOps organization (https://dev.azure.com/azuredevopsdsc/) that is destructible, and added a PAT for it which is available when running builds on main
. The organization is entirely separate with its own accounts so we could potentially add maintainers to it if needed.
This does not solve this issue, but at least the tests that exist, and new ones that are created, can run. There will be a problem if a PR is merged and the build fails on the integration tests.
Useful SQL Server installation information from the SqlServerDsc project: `DownloadExeName`, `DownloadExePath`, `DownloadIsoName`, `DownloadIsoPath`
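As a rough illustration of how those values might be consumed when fetching installation media (every name and the URL below is a hypothetical placeholder, not a value taken from SqlServerDsc):

```powershell
# Sketch only: placeholder names/URL, not the SqlServerDsc implementation.
$DownloadIsoName = 'SQLServer2019-x64-ENU-Dev.iso'          # hypothetical
$DownloadIsoPath = Join-Path -Path $env:TEMP -ChildPath $DownloadIsoName

# Download the ISO to the agent's temp directory (URL is a placeholder).
Invoke-WebRequest -Uri "https://download.example.com/$DownloadIsoName" `
                  -OutFile $DownloadIsoPath
```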
Useful links for Azure DevOps Server 2020 RTW (from here):
Just as an update... I've been taking a look at this and am now at a point where the Azure DevOps installer/EXE is downloaded, then Azure DevOps Server is installed and configured automatically/programmatically.
So far, it takes about 25 minutes on a hosted build server at present (the majority of that time is the installation itself, but there might be ways of skipping unused components to speed this up at some point).
Currently, I am trying to (and need to) determine:
How to generate a Personal Access Token (PAT) - There might be a function for doing this via the API, but it's unclear if this API call will work with a user's password rather than a PAT on the initial call, or whether the user has to log in using a different mechanism - Can't really use the API to generate a PAT if we need a PAT to use it - Not sure if Selenium WebDriver or similar could be used to work through the steps via the GUI in some way.
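For context on the chicken-and-egg problem: once a PAT exists, the Azure DevOps REST API accepts it as HTTP Basic auth with an empty user name. A minimal sketch (the organization URL and environment variable name are placeholders):

```powershell
# Sketch: authenticating an Azure DevOps REST call with an existing PAT.
# The PAT goes in a Basic auth header as ':<pat>' base64-encoded.
# $env:AZDO_PAT and the organization URL below are assumptions/placeholders.
$pat   = $env:AZDO_PAT
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))

Invoke-RestMethod `
    -Uri 'https://dev.azure.com/someOrganizationName/_apis/projects?api-version=6.0' `
    -Headers @{ Authorization = "Basic $token" }
```

Which is exactly the problem: every call like this already needs a PAT (or another token), so the API can't bootstrap its own first PAT from just a password.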
I'm not sure on how to proceed with this one at present. Any thoughts welcome.
Also note that Azure DevOps will seemingly install SQL Server Express as part of the configuration step, so I've been able to perform a successful (according to the logs, but I've not been able to connect into it yet) installation and configuration of a 'BasicInstall' Azure DevOps Server without having to perform a specific SQL Server installation.
I'm unclear what features using this default SQL Server Express will include/omit from the Azure DevOps Server instance at present, but I've been trying to focus on getting a basic Azure DevOps Server instance running before considering anything else.
I might park the above for a bit and have a think on it, and possibly look at alternative options... specifically around running (or confirming the run of) a contributor-specific pipeline/build (in their own Azure DevOps organization) as part of the PR build, based on the same PR commit from the same repository.
Some notes:
- Config for the local (DSCCommunity `AzureDevOpsDsc` organization) and remote (contributor organization) builds - possibly in a config file, making use of a `.gitattributes` file to omit this file from the final merge into `main`
- Add conditions to `azure-pipelines.yml` if needed (e.g. if PR build or non-PR build, or if in the DSCCommunity `AzureDevOpsDsc` build/org etc.) to determine the steps/jobs to be run
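One way to branch on which organization the build is running in could use the predefined `System.CollectionUri` pipeline variable. A minimal sketch (the organization name in the regex is an assumption):

```powershell
# Sketch: decide which steps to run based on the organization the build
# is executing in. SYSTEM_COLLECTIONURI is a predefined Azure Pipelines
# variable (e.g. 'https://dev.azure.com/dsccommunity/'); the org name
# matched below is an assumption for illustration.
$collectionUri = $env:SYSTEM_COLLECTIONURI

if ($collectionUri -match 'dev\.azure\.com/dsccommunity/') {
    Write-Host 'Running in the DSCCommunity organization.'
}
else {
    Write-Host 'Running in a contributor organization; run integration tests here.'
}
```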
I'm slightly steering towards this being my preferred solution (if I can get to something that is workable) as it avoids the 25+ minute pre-integration-test setup time (see my previous comment/post) and allows the integration tests to complete within a few minutes (at present) - nice, fast feedback from the integration tests 😄.
We'd also be testing against an up-to-date version/instance - it would remove a little maintenance relating to upgrading integration test target instances etc. and give us relatively quick visibility of changes to the API breaking functionality in the module.
The setup of the builds for new contributors is still likely to be more time-consuming using this approach though.
And also, just putting it out there (even though it's bad practice and potentially higher risk)...
What are the downsides/risks of making the variable (in the build pipeline) that holds the PAT into a non-sensitive variable (so the PR builds can use/see it), while ensuring the scope is limited to the resources that can be managed by it?
I'd guess the PAT would no longer be protected (and effectively public to anyone who wanted to create a PR/change/whatever to uncover it, as it wouldn't be suppressed in the logs - although we could potentially create a second, sensitive variable with the same value/PAT to obfuscate this?).
The PAT would only provide access to the teardown 'organization' and any resources in it (assuming it is scoped and created correctly), so the likely problems would be people deliberately messing up resources within the instance to hinder/impact the build/integration environment (which would/could be "reset" during a build anyway).
This would also mean that the PRs couldn't run in parallel as they would all be running against the same instance (and there would have to be some lock mechanism in the build to prevent this).
This seems like it may be less work than the other options? Making a PAT deliberately visible (even though it's a little work to get hold of, and even if it grants access to little of any use/value) does seem non-preferred.
Thoughts?
Also, I've just found the following Microsoft answer, which might suggest that obtaining an API key via the API is not going to be an option (with reference to the option of installing a build-server copy of Azure DevOps Server)...
... which might suggest some form of automation via the GUI would be required to obtain an API key.
The integration tests are failing due to a lack of an Integration environment (Azure DevOps Services instance) and related API key.
In order to resolve the integration tests, the following variables need adding to the build/pipeline:
- `AzureDevOps.Integration.ApiUri` (e.g. `https://dev.azure.com/someOrganizationName/_apis/`), where the organization will be torn down and recreated each time (i.e. don't use this organization for anything you want to keep! 😁)
- `AzureDevOps.Integration.Pat` (set as a sensitive variable)

... there also has to be some consideration to ensure that multiple sets of integration tests can't run simultaneously against the same instance (not sure how that would be handled, initially - not sure if you want to create a new organization for every build? ... I think there is a limit).

Originally posted by @SphenicPaul in https://github.com/dsccommunity/AzureDevOpsDsc/issues/7#issuecomment-766335577