Closed: NicolasMassart closed this issue 4 years ago.
This is not an issue at the markdown-link-check level but at the link-check library level.
Indeed, the retry implemented in link-check expects a non-standard duration format of the form "1m30s", despite the standard being clear that the Retry-After value is a number of seconds. A side effect is that even when the implementation detects that the value is a number, it is passed to the ms library without change and treated as milliseconds. So a 60-second retry delay becomes 60 milliseconds and the retry of course fails.
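For illustration only, here is a minimal sketch (a hypothetical helper, not link-check's actual code) of how a standards-compliant Retry-After value could be parsed. Per the HTTP standard the header carries either an integer number of seconds or an HTTP-date, and the comment marks where feeding the bare number to ms goes wrong:

```js
// Hypothetical helper, not link-check's implementation: parse a Retry-After
// header into milliseconds. The standard allows either delay-seconds or an HTTP-date.
function parseRetryAfterMs(headerValue) {
  const seconds = Number(headerValue);
  if (!Number.isNaN(seconds)) {
    // "60" means 60 seconds, i.e. 60000 ms. Passing "60" to the ms library
    // instead yields 60 (milliseconds), which is the bug described above.
    return seconds * 1000;
  }
  // Otherwise the value is an HTTP-date, e.g. "Wed, 21 Oct 2015 07:28:00 GMT".
  const retryAt = Date.parse(headerValue);
  return Number.isNaN(retryAt) ? null : Math.max(0, retryAt - Date.now());
}
```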
It may interest you to know that the link-check dependency fix has been released: https://github.com/tcort/link-check/blob/master/CHANGELOG.md#version-452
See also #123 for the pending update of markdown-link-check.
Could you run some tests to confirm that it works now? (We have unit tests, and I tested on my CI, which was always failing and is now fine, but I'd still like to make sure.)
You just have to update your package.json file to point to the master branch of this repo until we create a new release.
Do that by running npm install --save tcort/markdown-link-check
in your project root. It will update package.json for you.
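For reference, assuming a standard npm setup, the resulting dependency entry in your package.json should look roughly like this:

```json
{
  "dependencies": {
    "markdown-link-check": "github:tcort/markdown-link-check"
  }
}
```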
Then of course switch back to the released npm version once we publish it 😉
Thanks!
https://github.com/nodejs/community-committee/pull/640
https://github.com/gaurav-nelson/github-action-markdown-link-check/issues/73
https://github.com/chaos-mesh/chaos-mesh/issues/998
The issue
#106 introduced a retry on the HTTP 429 code, which means "retry later", as many links to GitHub were hitting this issue.
But even though it mostly works, it is sometimes clearly not enough.
Example screenshot: a CI pipeline that needed to be retried twice before link checking was all green, without changing anything between the runs of course. The matching commit is https://github.com/ConsenSys/doc.goquorum/commit/f3eb2cae1ac729f5ab772b9927fbe7aa9884c799
The possible causes
It may be that with too many links tested, GitHub increases the expected retry delay, so by the time we retry on our side it may be too soon because other requests have increased the value in the meantime. There's some research to do here.
Options
One idea this raises, as in other open issues (#40, #111), is that a broader computation than handling links one by one could improve things. Having a context where we know whether other links are waiting for a retry and what the latest retry duration value is for a specific domain may help; see the sketch after this paragraph. But first we have to clarify what the issue is exactly. It's not easy to test, since it's not something we can reproduce locally when targeting GitHub links (the 429 only happens when run from CI), but we can try, and perhaps ask GitHub directly.
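As a very rough illustration of that per-domain idea (purely a sketch; nothing in markdown-link-check or link-check works this way today), a shared retry context could record the most recent retry deadline seen for each host, so that all pending checks wait for the latest value rather than their own possibly stale one:

```js
// Hypothetical per-domain retry context (a sketch of the idea, not existing code).
class DomainRetryContext {
  constructor() {
    this.nextAllowedAt = new Map(); // host -> earliest timestamp (ms) for the next request
  }

  // Called whenever a 429 comes back: keep the furthest-out deadline seen for this host.
  record429(host, retryAfterMs) {
    const candidate = Date.now() + retryAfterMs;
    const current = this.nextAllowedAt.get(host) || 0;
    this.nextAllowedAt.set(host, Math.max(current, candidate));
  }

  // How long a new request to this host should still wait before retrying.
  delayFor(host) {
    const next = this.nextAllowedAt.get(host) || 0;
    return Math.max(0, next - Date.now());
  }
}
```

Whether something like this would actually avoid the cascading 429s is exactly the research question above.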