actions / checkout

Action for checking out a repo
https://github.com/features/actions
MIT License

Can't use 'tar -xzf' extract archive file #1448

Closed: szepeviktor closed this issue 1 year ago

szepeviktor commented 1 year ago
Download action repository 'actions/checkout@v3' (SHA:f43a0e5ff2bd294095638e18286ca9a3d1956744)
Error: Can't use 'tar -xzf' extract archive file: /home/runner/work/_actions/_temp_81e514cc-59b2-4af3-aa4d-6f8210ffde88/752d468e-f00d-474c-af3f-80fbe27b5318.tar.gz. return code: 2.

That hash points to v3.6.0

I do not know what to add here. https://github.com/nunomaduro/larastan/actions/runs/6074130772/job/16477441571

shapirus commented 1 year ago

It looks like it affects multiple users (to say the least) :)

For a workaround, is anyone aware of how to pin the version used in a workflow to an earlier release?

Update: I too can confirm that changing to checkout@v4 works.

jamesmhaley commented 1 year ago

Does this have anything to do with the v4.0 release from @takost & @simonbaird?

kevinlul commented 1 year ago

Seems to be a transient network-related issue, based on previous instances.

#514 #545 #672 #673 #815 #948 #949

mvdan commented 1 year ago

I'm trying to think how this even broke. It must have been on the server side, because we are using actions/checkout@v3, so we should not be affected by the recent v4.0.0 tag.

ovr commented 1 year ago

actions/checkout@v4 works. It can be used as a quick fix.

kevinlul commented 1 year ago

The second re-run (third attempt) worked for me (https://github.com/DawnbrandBots/bastion-bot/actions/runs/6074117346). I think it's not related specifically to this repository but to GitHub itself serving the tarball.

mvdan commented 1 year ago

> Seems to be a transient network-related issue, based on previous instances.

Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.

andraspatka-dev commented 1 year ago

Confirmed, updating to v4 fixes the issue.

pimhakkertgc commented 1 year ago

Oh no... We need this to work today :#

EDIT: No, I cannot just change to v4 for a production environment that needs to go live soon.

kevinlul commented 1 year ago

> Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.

It's not actions/checkout — it's the Actions runner itself downloading and extracting all the used action packages before starting the workflow.
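
For anyone unfamiliar with that setup phase: the sketch below roughly approximates what the runner does for every action referenced in a workflow before the first step starts. It is only an illustration under assumptions; the real runner is a separate service, and the paths and variable names here are made up.

# Rough approximation of the runner's "Download action repository" phase; illustrative only.
REPO="actions/checkout"
SHA="f43a0e5ff2bd294095638e18286ca9a3d1956744"   # the v3.6.0 commit from the log above
DEST="${RUNNER_TEMP:-/tmp}/checkout-$SHA"        # hypothetical destination directory
mkdir -p "$DEST"
# The runner fetches the tarball for the resolved commit from the GitHub API...
curl -fsSL "https://api.github.com/repos/$REPO/tarball/$SHA" -o "$DEST/action.tar.gz"
# ...and extracts it. A truncated or corrupt response makes tar exit with code 2,
# which is exactly the "Can't use 'tar -xzf'" error quoted at the top of this issue.
tar -xzf "$DEST/action.tar.gz" -C "$DEST" --strip-components=1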

msandu62 commented 1 year ago

same issue here

robmaxwellirl commented 1 year ago

Seems like it's back working for me now anyways.

iedu61 commented 1 year ago

same issue here

chrisella commented 1 year ago

+1 here, getting the error now. Is upgrading to v4 A) the best option and B) non-breaking?

candh commented 1 year ago

working again for us!

szepeviktor commented 1 year ago

GitHub Status says ✅ All Systems Operational

ananni13 commented 1 year ago

> Interesting - network reliability aside, I wonder why actions/checkout isn't retrying on network errors.

> It's not actions/checkout — it's the Actions runner itself downloading and extracting all the used action packages before starting the workflow.

Yeah, this seems unrelated to the checkout action and more an issue on the Runner side.

Changing actions/checkout from v3 to v4 is not a fix; I'm getting random outcomes with either tag: some jobs work, others fail.

janeusz2000 commented 1 year ago

> Confirmed, updating to v4 fixes the issue.

confirmed as well

gileorn commented 1 year ago

Seems like it's back to normal

shealavington commented 1 year ago

I can also confirm that the issue did/does exist.

I can't speak for the v4 upgrade being a fix, but doing GitHub's "Retry Failed Jobs" a few times eventually allows the job to work (even while keeping the v3 tag). ✨
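
If many runs are affected, that same workaround can be scripted with the GitHub CLI. A minimal sketch, assuming a reasonably recent, authenticated gh; "ci.yml" is a placeholder for your workflow file name:

# Re-run only the failed jobs of the last 10 failed runs of one workflow.
for run_id in $(gh run list --workflow ci.yml --status failure --limit 10 \
                 --json databaseId --jq '.[].databaseId'); do
  gh run rerun "$run_id" --failed
done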

wiledal commented 1 year ago

Still inconsistent/flaky for us; it seems to happen for the majority of runs.

kevin-studer commented 1 year ago

We have the problem inside matrix builds, but not in our linting job, which does not use the matrix declaration.

dvirsegev commented 1 year ago

Still inconsistent for me also

anatawa12 commented 1 year ago

It looks like re-running the job many times may fix the problem.

soltanoff commented 1 year ago

actions/checkout@v4 helps me. But is it safe to use?

acollado2-cambridge commented 1 year ago

Looks like it's intermittent, as a couple of reruns seem to make it work.

k2589 commented 1 year ago

Upgrading to v4 is not fixing the issue. Most likely some error in the GitHub runner images caused it.

gkech commented 1 year ago

Same issue here. (Edit: v4 worked for us.)

easimon commented 1 year ago

> Looks like it's intermittent, as a couple of reruns seem to make it work.

Intermittent, but failing more often than succeeding.

catalinmer commented 1 year ago

Failing every single run in the past hour.

bobvanderlinden commented 1 year ago

GitHub Actions probably relies on https://api.github.com/repos/XXX/XXX/tarball/XXX, which is what is failing. It doesn't just fail in GitHub Actions: I also ran into error: unable to download 'https://api.github.com/repos/NixOS/nixpkgs/zipball/57695599bdc4f7bfe5d28cfa23f14b3d8bdf8a5f': HTTP error 500 in Nix.
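
One way to observe the flakiness from outside Actions is to hit the same tarball endpoint a few times and watch the status codes. A small probe sketch, reusing the commit SHA from the original report; it only prints what the API returns:

URL="https://api.github.com/repos/actions/checkout/tarball/f43a0e5ff2bd294095638e18286ca9a3d1956744"
# Print the final HTTP status for a handful of attempts; during the incident this would
# presumably mix 200s with 500s, matching the intermittent failures described above.
for attempt in 1 2 3 4 5; do
  code=$(curl -sL -o /dev/null -w '%{http_code}' "$URL")
  echo "attempt $attempt: HTTP $code"
done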

bahaZrelli commented 1 year ago

[image]

pinkforest commented 1 year ago

Party on 🥳

ricardoboss commented 1 year ago

https://xkcd.com/1168/

merinofg commented 1 year ago

Updating to v4 is not a solution for companies that have hundreds of workflows and reusable workflows built around the v3 version.

Breaking changes out of the blue are not as easy as "oopsy, just update to v4".

ervin-pactum commented 1 year ago

It looks like a cache propagation issue; depending on where you try the following from:

wget https://api.github.com/repos/actions/checkout/tarball/f43a0e5ff2bd294095638e18286ca9a3d1956744
tar -xzf f43a0e5ff2bd294095638e18286ca9a3d1956744

you will either get the files or a broken archive. I would guess no action from users, nor an upgrade, is needed :)
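
To tell the two outcomes apart without extracting anything, the download can be checked first. A small sketch along the same lines as the commands above: gzip -t tests the compressed stream and tar -tzf lists the contents, and both exit non-zero on a truncated or corrupt archive, which is what produces tar's return code 2.

SHA=f43a0e5ff2bd294095638e18286ca9a3d1956744
wget -q "https://api.github.com/repos/actions/checkout/tarball/$SHA" -O "$SHA.tar.gz"
if gzip -t "$SHA.tar.gz" 2>/dev/null && tar -tzf "$SHA.tar.gz" >/dev/null 2>&1; then
  echo "archive looks intact"
else
  echo "broken or truncated archive"
fi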

hugovk commented 1 year ago

Now tracked at https://www.githubstatus.com/incidents/76xp2jd3px64

ernitingarg commented 1 year ago

I am also facing the same issue, we are blocked

[image]

hluaces commented 1 year ago

This has just been updated, so I guess we have to wait:

[image]

hluaces commented 1 year ago

While we wait: is anyone watching something interesting?

kurtislamb commented 1 year ago

Ted Lasso!

Vanilagy commented 1 year ago

@hluaces Watching Black Mirror for the first time on Netflix at the moment; most episodes are very interesting, quite thought-provoking, and tech-related. But this issue is likely fixed after 1 ep :P

yardensade commented 1 year ago

The GitHub response time for updating incidents is really slow, this has been happening for over an hour now. This is not an acceptable SLA.

xerdink commented 1 year ago

> The GitHub response time for updating incidents is really slow, this has been happening for over an hour now. This is not an acceptable SLA.

100% correct.

pimhakkertgc commented 1 year ago

> The GitHub response time for updating incidents is really slow, this has been happening for over an hour now. This is not an acceptable SLA.

This is standard GitHub practice these days. We experience issues with deploying our code at least once every month. I never deploy without looking at https://www.githubstatus.com/ anymore. This is sad.

olivier-zenchef commented 1 year ago

Upgrading to v4 did work for our CI

relloyd commented 1 year ago

Same issue here of course. Because of the way we use this action, I now have hundreds of pipelines broken using v3 and potentially 100s of developers blocked 🤦‍♂️

antm-pp commented 1 year ago

> While we wait: is anyone watching something interesting?

Yes, I'm watching all the panic'd updates to workflows where they've realised they only wrote the happy path and jobs weren't failing properly.

The problem is intermittent, so the move to v4 could be totally anecdotal. I just kept re-running the job, and sometimes it succeeds.

marcinciarka commented 1 year ago

> move to v4 could be totally anecdotal

v4 uses Node 20 by default, so everybody updating should take that into account. Most tests will pass, but deployments - not so sure.

I'll wait.

flaxel commented 1 year ago

The incident has been resolved. 🤔