rarkins opened 3 days ago
This GitHub Action example was autoclosed: https://github.com/ayushmanchhabra/vsx/pull/554
Related log:
{
  "autoReplaceStringTemplate": "{{depName}}@{{#if newDigest}}{{newDigest}}{{#if newValue}} # {{newValue}}{{/if}}{{/if}}{{#unless newDigest}}{{newValue}}{{/unless}}",
  "commitMessageTopic": "{{{depName}}} action",
  "currentValue": "v4.0.2",
  "currentVersion": "v4.0.2",
  "currentVersionTimestamp": "2024-02-07T04:42:16.000Z",
  "datasource": "github-tags",
  "depName": "actions/setup-node",
  "depType": "action",
  "fixedVersion": "v4.0.2",
  "packageName": "actions/setup-node",
  "registryUrl": "https://github.com",
  "replaceString": "actions/setup-node@v4.0.2",
  "sourceUrl": "https://github.com/actions/setup-node",
  "versioning": "docker",
  "warnings": [],
  "updates": [
    {
      "bucket": "major",
      "newVersion": "v2.3.0",
      "newValue": "v2.3.0",
      "releaseTimestamp": "2021-07-20T12:22:13.000Z",
      "newMajor": 2,
      "newMinor": 3,
      "newPatch": 0,
      "updateType": "minor",
      "branchName": "renovate/actions-setup-node-2.x"
    }
  ],
  "isSingleVersion": true
}
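The autoReplaceStringTemplate in that log is a Handlebars template. A minimal Python sketch of its branching logic (not Renovate's actual template engine, just an illustration) shows what the replacement string looks like for this update, which has a newValue but no newDigest:

```python
def render_replacement(dep_name, new_value=None, new_digest=None):
    """Mimic the branches of the autoReplaceStringTemplate:
    with a digest -> "dep@<digest> # <value>"; without -> "dep@<value>"."""
    if new_digest:
        out = f"{dep_name}@{new_digest}"
        if new_value:
            out += f" # {new_value}"
        return out
    return f"{dep_name}@{new_value}"

# The (downgrade) update from the log above: newValue only, no digest
print(render_replacement("actions/setup-node", new_value="v2.3.0"))
# -> actions/setup-node@v2.3.0
```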
This GitHub Action example was not autoclosed yet: https://github.com/TWiStErRob/net.twisterrob.astro/pull/67
This appears to be a Docker downgrade: https://github.com/renovatebot/renovate/discussions/29855
Another GitHub Action example: https://github.com/etrias-nl/php-dev/pull/467
We are facing the same problem on our self-hosted instance, with Docker images from Docker Hub and from our own GitLab registry. We hadn't noticed any downgrade before 06/22, but now we get about one per day.
Jun 24 02:25:32.862: {
  "deps": [
    {
      "depName": "earthly/buildkitd",
      "currentValue": "v0.8.14",
      "currentDigest": "sha256:c6b989bb1280c04d26ce8d32567dacadd731f8e398a226eaf65d8fcf8ac06bc6",
      "datasource": "docker",
      "versioning": "docker",
      "replaceString": "earthly/buildkitd:v0.8.14@sha256:c6b989bb1280c04d26ce8d32567dacadd731f8e398a226eaf65d8fcf8ac06bc6",
      "updates": [
        {
          "bucket": "minor",
          "newVersion": "v0.5.1",
          "newValue": "v0.5.1",
          "newMajor": 0,
          "newMinor": 5,
          "newPatch": 1,
          "updateType": "patch",
          "newDigest": "sha256:8a3a2f4d51f4ffa3a37da95341aa473d57e827ef7f22c2a447756eb5ca612e28",
          "branchName": "renovate/patch-earthly"
        }
      ],
      "packageName": "earthly/buildkitd",
      "warnings": [],
      "registryUrl": "https://index.docker.io",
      "currentVersion": "v0.8.14",
      "isSingleVersion": true,
      "fixedVersion": "v0.8.14"
    }
  ],
  "matchStrings": [
    "(?<depName>[^ :\"]+?):(?<currentValue>[^ :@]+?)@(?<currentDigest>sha256:[a-f0-9]+)"
  ],
  "datasourceTemplate": "docker",
  "versioningTemplate": "docker",
  "packageFile": "hieradata/roles/gitlab_runner::earthly.yaml"
}
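The custom manager's matchStrings regex can be checked in isolation. A quick Python sketch (Python uses `(?P<name>…)` where Renovate's RE2-style patterns use `(?<name>…)`) confirms it captures the three expected groups from the replaceString in that log:

```python
import re

# Same pattern as the matchStrings entry, rewritten with Python-style named groups
pattern = re.compile(
    r'(?P<depName>[^ :"]+?):(?P<currentValue>[^ :@]+?)@(?P<currentDigest>sha256:[a-f0-9]+)'
)

line = ("earthly/buildkitd:v0.8.14"
        "@sha256:c6b989bb1280c04d26ce8d32567dacadd731f8e398a226eaf65d8fcf8ac06bc6")
m = pattern.search(line)
print(m.group("depName"))       # earthly/buildkitd
print(m.group("currentValue"))  # v0.8.14
```

So the extraction side looks correct; the bad value appears only in the proposed update, not in what was parsed from the file.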
@zharinov now we know it's not related to the Mend app/cache specifically.
@MarcWort are you using repository or package caching when you self-host? e.g. disk-based or S3-based for the repository cache, or disk-based or Redis-based for the package cache?
@rarkins I use the defaults with the Renovate Docker image, so just the disk-based package cache.
Debug logs from the buildkitd downgrade I mentioned:
Jun 24 02:25:20.827 DEBUG: getDigest(https://index.docker.io, earthly/buildkitd, v0.5.1) (repository=provid/puppet-control)
Jun 24 02:25:20.828 DEBUG: getManifestResponse(https://index.docker.io, earthly/buildkitd, v0.5.1, head) (repository=provid/puppet-control)
Jun 24 02:25:26.498 DEBUG: getDigest(https://index.docker.io, earthly/buildkitd, v0.8.14) (repository=provid/puppet-control)
Jun 24 02:25:26.498 DEBUG: getManifestResponse(https://index.docker.io, earthly/buildkitd, v0.8.14, head) (repository=provid/puppet-control)
Docker versioning hasn't changed in 7 months: https://github.com/renovatebot/renovate/tree/main/lib/modules/versioning/docker
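For reference, docker versioning treats tags like v0.5.1 and v0.8.14 as loosely semver-like. A simplified comparison sketch (not Renovate's actual implementation in lib/modules/versioning/docker, which also handles prefixes and suffixes) shows why the proposed v0.5.1 is unambiguously a downgrade from v0.8.14:

```python
def parse(tag):
    # Simplified: strip a leading "v" and compare dot-separated numeric parts.
    return tuple(int(p) for p in tag.lstrip("v").split("."))

current, proposed = "v0.8.14", "v0.5.1"
print(parse(proposed) < parse(current))  # True: the proposed "update" is a downgrade
```

Any correct ordering of these two tags agrees, which is why the bug points at the lookup/bucketing logic rather than at version comparison itself.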
Here are the recent changes to the common lookup logic: https://github.com/renovatebot/renovate/commits/main/lib/workers/repository/process/lookup
Can anyone narrow down the release range in which this would have started?
We're seeing a reasonably high number of these PRs which then autoclose themselves. That's good in a way, but it also means the issue is related to some kind of temporary data problem, which is harder to diagnose.
But it still decreases my confidence in ever using automerge with Renovate :s. I had to ask Renovate to retry the PRs for it to figure out the update wasn't needed. Could the preset that separates minor and major updates have something to do with this?
Since we are self-hosted, I can see the requests made to our GitLab registry. Notice that the response is always the exact same number of bytes because nothing has changed, yet Renovate did a downgrade at [27/Jun/2024:10:06:34 +0200]:
192.168.110.217 - - [27/Jun/2024:02:36:58 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 401 188 "" "RenovateBot/37.419.1 (https://github.com/renovatebot/renovate)" -
192.168.110.217 - - [27/Jun/2024:02:37:04 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 200 1471 "" "RenovateBot/37.419.1 (https://github.com/renovatebot/renovate)" 4.64
192.168.110.217 - - [27/Jun/2024:10:06:33 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 401 188 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" -
192.168.110.217 - - [27/Jun/2024:10:06:34 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 200 1471 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" 4.64
192.168.110.217 - - [27/Jun/2024:13:02:12 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 401 188 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" -
192.168.110.217 - - [27/Jun/2024:13:02:18 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 200 1471 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" 4.64
192.168.110.217 - - [27/Jun/2024:15:53:55 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 401 188 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" -
192.168.110.217 - - [27/Jun/2024:15:54:01 +0200] "GET /v2/provid/kubernetes-extensions/tags/list?n=10000 HTTP/1.1" 200 1471 "" "RenovateBot/37.420.1 (https://github.com/renovatebot/renovate)" 4.64
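Those access-log lines show the standard Docker Registry v2 tags endpoint (GET /v2/&lt;name&gt;/tags/list, with n as a page-size parameter). Given an identical response body every run, a lookup over it should be deterministic. A minimal sketch, using a hypothetical response body (the real one was 1471 bytes and unchanged between runs), illustrates what Renovate receives and how tag ordering matters:

```python
import json

# Hypothetical tags/list response body; real tag names are unknown here
body = json.dumps({
    "name": "provid/kubernetes-extensions",
    "tags": ["v0.9.0", "v1.2.0", "v1.10.0"],
})

tags = json.loads(body)["tags"]

# Naive lexicographic sorting misplaces "v1.10.0"...
print(sorted(tags))  # ['v0.9.0', 'v1.10.0', 'v1.2.0']
# ...while numeric-aware sorting ranks it highest, as docker versioning should.
numeric = sorted(tags, key=lambda t: tuple(int(p) for p in t.lstrip("v").split(".")))
print(numeric[-1])   # v1.10.0
```

Since the registry returned identical data, the nondeterminism must come from something downstream of this response (caching, bucketing, or lookup state).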
Can anyone narrow down the release range in which this would have started?
This version was fine for us, so we moved back to this ghcr.io/renovatebot/renovate:37.409.1
This PR should hopefully prevent these from happening, but does not solve the root cause: #29921
@rarkins https://github.com/renovatebot/renovate/issues/29919#issuecomment-2196775181 is now auto-closed as well, after the 3-hour cycle ran Renovate again on the repo.
@TWiStErRob could you save the full log (or Job IDs) of the run where it was autoclosed, and of the run before it when it was created? I'd like to see if there are any helpful indicators.
Yep, @rarkins I was looking at that, here are the files:
I was diffing T and T+3; a few observations:
@TWiStErRob thanks for the logs and detailed descriptions.
The T-3 run also has only one GraphQL query. The first GraphQL query should be the initRepo() one, which implies that no GitHub tags/releases were queried (the results came from cache).
This indicates that caching alone doesn't cause the problem, although it doesn't rule out that caching contributes to it; it just doesn't happen every time the result comes from cache. From a quick code inspection I couldn't figure out which cache period applied here.
The fact that self-hosted users are also seeing this seems to imply:
And what about non-Docker downgrades, with GitHub Actions? I just got another one right now: https://github.com/oxsecurity/megalinter/pull/3715
GitHub Actions have already been mentioned multiple times above; they use docker versioning.
Renamed the issue to be less "technically correct" (i.e. no longer stating that it's limited to Docker versioning) so that it's less confusing for most people.
It seems the problem is limited to Docker or GitHub Actions, both of which use versioning=docker.
FYI the workaround will be deployed to the hosted app today
Cc @nabeelsaabna
Renovate in some cases is creating PRs to update dependencies where the "update" is actually a downgrade. It appears to be isolated to docker versioning, which is used for the docker datasource and also for GitHub Actions.
Discussion: #29901
Unfortunately we are not yet able to reproduce it.