Open AArnott opened 1 month ago
Triage: @AArnott does that feed exist, and does it work with `dotnet restore` but not with workloads? Can you try adding `--ignore-failed-sources`? Something is not configured right with that feed, and my guess is that the error case is not failing fast.
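For reference, a minimal sketch of the suggested invocation, assuming the flag is simply appended to the existing workload step:

```sh
# Sketch: treat package source failures as warnings instead of hard errors
# while restoring workloads.
dotnet workload restore --ignore-failed-sources
```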
@marcpopMSFT Yes, the feed exists, and the `dotnet restore` step worked successfully before `dotnet workload restore` runs.
> Can you try adding `--ignore-failed-sources`?
My nuget.config only has one source specified, with `<clear/>` above it to prevent any ambient source from being added. I also set `NUGET_PACKAGES` to the agent temp dir so that every pipeline run restores fully from scratch, so I'm certain that restore works (i.e. it actually downloads packages), but workload restore fails.
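For context, a rough sketch of that setup; the feed URL, source name, and cache path below are placeholders, not the real values:

```sh
# Single private source with <clear/> so no ambient sources leak in
# (ORG/FEED are placeholders for the real Azure Artifacts feed).
cat > nuget.config <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="private-feed" value="https://pkgs.dev.azure.com/ORG/_packaging/FEED/nuget/v3/index.json" />
  </packageSources>
</configuration>
EOF

# Point the package cache at the agent temp dir so every pipeline run restores from scratch.
# AGENT_TEMPDIRECTORY is the environment-variable form of Azure Pipelines' Agent.TempDirectory.
export NUGET_PACKAGES="$AGENT_TEMPDIRECTORY/.nuget/packages"
```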
@nkolev92 @aortiz-msft are there logs we can collect to see the time spent in nuget for this? Have you heard of any discrepancy between nuget.org and AzDO?
Since we're talking about `dotnet workload restore`, I'd need some pointers to that code so we can figure out which logs might be available. All NuGet remote calls end up going through HttpRetryHandler, which will tell you how long each request takes.
Normally that would be at informational verbosity, assuming the logger is passed down.
> Have you heard of any discrepancy between nuget.org and AzDO?
I'd generally expect nuget.org to be faster than AzDO. Auth is one starting point. Beyond that, nuget.org is optimized for parallel downloads, with static blobs and CDNs, while AzDO does more compute, so it's not unusual for nuget.org to be noticeably faster.
I don't know if that fully explains the jump from 25s to 4 min, since that's driven by the number of calls made, the packages downloaded, and their sizes.
The logs posted here are full of lines saying the service index wasn't loaded; that's probably the first thing I'd look into: why there are so many failures fetching the service index resource.
Another thing I'd look into is whether the SourceRepository object is shared across all of these workload calls.
> Normally that would be at informational verbosity, assuming the logger is passed down.
Probably not. Most msbuild args aren't copied to the inner invocation of msbuild, sadly.
@AArnott you could set the msbuild args as properties and they should get picked up by the inner invocation. Let us know if you have more info for us to dig into based on nkolev92's reply. We probably need to take a look at improving how we report nuget errors and data, since we're a pass-through in a lot of cases and it's hard to track down.
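As a hedged sketch of that suggestion: MSBuild surfaces environment variables as global properties, so one way to get a property to the inner invocation may be to export it before the command (the property name below is a placeholder, not something the SDK defines):

```sh
# Sketch only: an exported environment variable is visible to MSBuild as a property,
# so it may reach the inner MSBuild invocation that `dotnet workload restore` spawns.
# "MyProperty" is a placeholder name.
export MyProperty=SomeValue
dotnet workload restore
```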
Also, you say you're set up for auth. Do you get the 'unable to load' message on all attempts to hit AzDO? I wonder if auth is not configured correctly (reminder that secure feeds don't work well with workloads on macOS in particular, because of limitations of the credential manager extension).
@AArnott any update on whether you were seeing the 'unable to load' message whenever hitting AzDO? Can you try enabling NuGet plugin logging and try on Windows? Set `NUGET_PLUGIN_ENABLE_LOG` to `true` and `NUGET_PLUGIN_LOG_DIRECTORY_PATH` to an available path.
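As a sketch of enabling that logging in a pipeline step (the log directory path is just an example):

```sh
# Enable NuGet credential-plugin logging and point it at a writable directory,
# then run the workload restore so the plugin traffic gets logged.
export NUGET_PLUGIN_ENABLE_LOG=true
export NUGET_PLUGIN_LOG_DIRECTORY_PATH="$AGENT_TEMPDIRECTORY/nuget-plugin-logs"
mkdir -p "$NUGET_PLUGIN_LOG_DIRECTORY_PATH"
dotnet workload restore
```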
> you could set the msbuild args as properties and they should get picked up by the inner invocation
I'm not familiar with how to do this. I have filed other bugs about `dotnet workload restore` not behaving like msbuild, and the response has been that its being msbuild-based is an implementation detail, so I shouldn't expect to be able to control it (e.g. https://github.com/dotnet/sdk/issues/42485).
I'm happy to try setting some properties though if it'll help.
> Do you get the 'unable to load' message on all attempts to hit AzDO?
Definitely not. Ultimately it works, as attested to by the fact that the build succeeds, and I'm on agents that have no local nuget cache to begin with, so nuget packages (and presumably workloads) must have been downloaded (from AzDO).
> Can you try enabling NuGet plugin logging and try on Windows? Set `NUGET_PLUGIN_ENABLE_LOG` to `true` and `NUGET_PLUGIN_LOG_DIRECTORY_PATH` to an available path.
Yes, I'll try that and get back to you.
@nkolev92 what should we be looking for in those logs?
I took a quick look at the plugin logs, focusing on the timings, to see whether acquiring a token takes a long time or anything like that. All of the token acquisitions return very quickly, so auth isn't what's taking the time here.
@nkolev92 so how do we determine what is taking the time? If it's not in the nuget logs, where might it be? Is the time on our side and we need our own perf trace from the SDK?
We haven't looked at NuGet-specific logs yet, so I think we need to go back to the original idea, run `dotnet workload restore --verbosity diag`, and see those logs.
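One way to capture that output for inspection, as a sketch (the log file name is arbitrary):

```sh
# Run the workload restore at diagnostic verbosity and keep a copy of the output
# so the per-request timings can be inspected afterwards.
dotnet workload restore --verbosity diag 2>&1 | tee workload-restore-diag.log
```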
When I run `dotnet workload restore --verbosity diag` locally, I do see a lot of log messages:
[NuGet Manager] [Info] GET https://pkgs.dev.azure.com/dnceng/9ee6d478-d288-47f7-aacc-f6e6d082ae6d/_packaging/45bacae2-5efb-47c8-91e5-8ec20c22b4f8/nuget/v3/flat2/microsoft.net.sdk.aspire.manifest-8.0.100.msi.x64/8.2.2/microsoft.net.sdk.aspire.manifest-8.0.100.msi.x64.8.2.2.nupkg
[NuGet Manager] [Info] OK https://pkgs.dev.azure.com/dnceng/9ee6d478-d288-47f7-aacc-f6e6d082ae6d/_packaging/45bacae2-5efb-47c8-91e5-8ec20c22b4f8/nuget/v3/flat2/microsoft.net.workload.mono.toolchain.net6.manifest-9.0.100-rc.2.msi.x64/index.json 127ms
Each HTTP request will be logged and it'll tell us if some are taking longer.
@AArnott can you try that?
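A rough sketch for digging through that output, assuming it was saved to a file like the one above: each completed request line ends with its elapsed time in milliseconds, so the slowest ones can be pulled out with standard tools:

```sh
# Prefix each "OK <url> <N>ms" line with its timing, sort descending, and show the slowest 20.
# "workload-restore-diag.log" is the hypothetical file the diag output was saved to.
grep -E 'OK https?://.* [0-9]+ms$' workload-restore-diag.log \
  | sed -E 's/.* ([0-9]+)ms$/\1 &/' \
  | sort -rn \
  | head -20
```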
Describe the bug
Frequent pipeline failures due to nuget.org not responding to package restore requests drive me to consider using a private Azure Artifacts feed instead of nuget.org in nuget.config.
But this change has an unexpected side effect: the time to run `dotnet workload restore` jumps dramatically. The macOS agent is the only one to display the following errors, which it does only after the change:
Eventually the work completes (successfully, I guess, because the rest of the build succeeds).
This is repeatable. And the macOS agent has the authentication necessary to pull from the private feed.
Further technical details