Closed: szepeviktor closed this issue 1 year ago
It looks like it affects multiple (to say the least) users :)
As a workaround, does anyone know how to pin the version used in a workflow to an earlier release?
Update: I too can confirm that changing to checkout@v4 works.
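On the pinning question above, here is a minimal sketch of how a step can be pinned to an earlier release or to an exact commit. The tag and SHA values are only examples (the SHA shown is the v3.6.0 commit mentioned later in this thread), and pinning alone likely would not have avoided this incident, because the runner still has to download the pinned tarball before the job starts.

```yaml
# Minimal sketch of pinning strategies for a workflow step.
steps:
  # Floating major tag: automatically follows the newest v3.x release
  - uses: actions/checkout@v3

  # Pinned to an exact release tag
  - uses: actions/checkout@v3.6.0

  # Pinned to an immutable commit SHA (this one corresponds to v3.6.0)
  - uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744
```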
Does this have anything to do with the v4.0 release from @takost & @simonbaird?
Seems to be a transient network-related issue, based on previous instances.
I'm trying to think how this even broke. It must have been on the server side, because we are using actions/checkout@v3, so we should not be affected by the recent v4.0.0 tag.
actions/checkout@v4 works. It can be used as a quick fix.
The second re-run (third attempt) worked for me: https://github.com/DawnbrandBots/bastion-bot/actions/runs/6074117346 I think it's not related specifically to this repository, but to GitHub itself serving the tarball.
> Seems to be a transient network-related issue, based on previous instances.
Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.
Confirmed, updating to v4 fixes the issue.
Oh no... We need this to work today :#
EDIT: No, I cannot just change to v4 for a production environment that needs to go live soon.
> Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.
It's not actions/checkout — it's the Actions runner itself downloading and extracting all the used action packages before starting the workflow.
same issue here
Seems like it's back working for me now anyways.
same issue here
+1 here, getting the error now. Is upgrading to v4 A) the best option and B) non-breaking?
working again for us!
GitHub Status says ✅ All Systems Operational
> Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.
> It's not actions/checkout — it's the Actions runner itself downloading and extracting all the used action packages before starting the workflow.
Yeah, this seems unrelated to the checkout action and more an issue on the Runner side.
Changing actions/checkout from v3 to v4 is not a fix; I'm getting random outcomes with either tag: some jobs work, others fail.
> Confirmed, updating to v4 fixes the issue.
confirmed as well
Seems like it's back to normal
I can also confirm that the issue did/does exist.
I can't speak for the v4 upgrade being a fix, but using GitHub's "Retry Failed Jobs" a few times eventually allows the job to work (even while keeping the v3 tag). ✨
Still inconsistent/flaky for us; it seems to happen for the majority of runs.
We have the problem inside matrix builds, but not in our linting job, which does not use the matrix declaration.
Still inconsistent for me also
It looks like re-running the job many times may fix the problem.
actions/checkout@v4 helps me. But is it safe to use?
Looks intermittent, as a couple of reruns seem to make it work.
Upgrading to v4 is not fixing the issue. Most likely some error in the GitHub runner images caused it.
Same issue here. (Edit: v4 worked for us.)
> Looks intermittent, as a couple of reruns seem to make it work.
Intermittent, but failing more often than succeeding.
Failing on every single run in the past hour.
GitHub Actions probably relies on https://api.github.com/repos/XXX/XXX/tarball/XXX, which is what is failing. It doesn't fail only in GitHub Actions; I also ran into error: unable to download 'https://api.github.com/repos/NixOS/nixpkgs/zipball/57695599bdc4f7bfe5d28cfa23f14b3d8bdf8a5f': HTTP error 500 in Nix.
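One rough way to check whether the endpoint itself is misbehaving is to probe it with retries. The sketch below is not an official workaround: the workflow and step names are made up, and it assumes curl 7.71+ for --retry-all-errors. As noted above, the real failure happens in the runner's pre-job download, which a workflow step cannot retry, so this only observes the endpoint's behaviour.

```yaml
# Hypothetical diagnostic workflow (names are illustrative): fetches the same
# kind of tarball the runner downloads for actions/checkout, retrying on
# transient server or network errors.
name: probe-checkout-tarball
on: workflow_dispatch

jobs:
  probe:
    runs-on: ubuntu-latest
    steps:
      - name: Check the checkout tarball endpoint
        run: |
          # --retry-all-errors (curl 7.71+) also retries on HTTP 5xx responses
          curl --fail --location --silent --show-error \
               --retry 5 --retry-delay 10 --retry-all-errors \
               --output checkout.tar.gz \
               https://api.github.com/repos/actions/checkout/tarball/v3
          # Verify the archive is not truncated or corrupted
          tar -tzf checkout.tar.gz > /dev/null && echo "tarball OK"
```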
Party on :partying_face:
Updating to v4 is not a solution for companies that have hundreds of workflows and reusable workflows built around the v3 version. Breaking changes out of the blue are not as easy as "oops, just update to v4".
It looks like a cache propagation issue. Depending on where you try the following from:
wget https://api.github.com/repos/actions/checkout/tarball/f43a0e5ff2bd294095638e18286ca9a3d1956744
tar -xzf f43a0e5ff2bd294095638e18286ca9a3d1956744
you will either get the files or a broken archive. I would guess no action from users, nor any upgrade, is needed :)
Now tracked at https://www.githubstatus.com/incidents/76xp2jd3px64
I am also facing the same issue; we are blocked.
The incident has just been updated, so I guess we have to wait:
While we wait: is anyone watching something interesting?
Ted Lasso!
@hluaces Watching Black Mirror for the first time on Netflix at the moment; most episodes are very interesting, quite thought-provoking, and tech-related. But this issue will likely be fixed after 1 ep :P
The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.
> The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.
100% correct.
> The GitHub response time for updating incidents is really slow; this has been happening for over an hour now. This is not an acceptable SLA.
This is standard GitHub practice these days. We experience issues with deploying our code at least once every month. I never deploy without looking at https://www.githubstatus.com/ anymore. This is sad.
Upgrading to v4 did work for our CI
Same issue here, of course. Because of the way we use this action, I now have hundreds of pipelines broken on v3 and potentially hundreds of developers blocked 🤦♂️
> While we wait: is anyone watching something interesting?
Yes, I'm watching all the panic'd updates to workflows where they've realised they only wrote the happy path and jobs weren't failing properly.
The problem is intermittent; the move to v4 could be totally anecdotal. I just kept re-running the jobs, and sometimes they succeed.
> move to v4 could be totally anecdotal
v4 uses Node 20 by default, so everybody updating should take that into account. Most tests will pass, but deployments, not so sure.
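For context on the Node 20 point: the runtime an action needs is declared in its action.yml, roughly as sketched below (the entry-point paths are from memory and only illustrative). The practical concern is mainly older self-hosted runners that do not yet ship a Node 20 runtime; on GitHub-hosted runners the switch is just the uses: line.

```yaml
# Sketch of the runtime declaration in each version's action.yml
# (entry-point paths quoted from memory; treat them as illustrative).

# actions/checkout@v3
runs:
  using: node16
  main: dist/index.js
---
# actions/checkout@v4 declares Node 20, which very old runner versions
# (mainly self-hosted ones) may not support yet.
runs:
  using: node20
  main: dist/index.js
```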
I'll wait.
The incident has been resolved. 🤔
That hash points to v3.6.0
I do not know what to add here. https://github.com/nunomaduro/larastan/actions/runs/6074130772/job/16477441571