AlexeyPechnikov opened this issue 1 year ago
Hi @mobigroup! Thank you for flagging this issue. The linked logs are no longer available. Is this issue still happening?
@AvaStancu, from what I can see, this issue is still happening in private org repos.
Could we remove the awaiting-customer-response label from the ticket, as we can confirm the issue is still not resolved?
This is a considerable issue in our self-hosted runners. Is there any update on this?
Hi @thomasguerneyiag, could you share a worker log (Worker_20230412-HHmmSS-utc.log) from a run where the cache hangs like that?

We've got a very similar issue in our workflows. We use setup-go@v4 (in this workflow), which sometimes stalls with the cache download percentage near 100% (example run - the job was running on a hosted ubuntu-latest runner).
According to my quick investigation, the issue most probably lies somewhere in the downloadCacheStorageSDK function from actions/toolkit. I assume that because:

- the setup-go action uses @actions/cache's restoreCache;
- restoreCache calls cacheHttpClient.downloadCache;
- downloadCache may call either downloadCacheStorageSDK or downloadCacheHttpClient, but in our case the StorageSDK variant is most probably the one used, because only it uses DownloadProgress with the output we're seeing (see the sketch below).
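For anyone following along, here is a minimal TypeScript sketch of that call chain as I read it. This is my simplification, not the actual toolkit source: the stub bodies and the hostname check are assumptions standing in for the real implementations.

```typescript
// Hedged sketch of the suspected restore path in actions/toolkit.
// The two stubs below stand in for the real download implementations.

async function downloadCacheStorageSDK(url: string, dest: string): Promise<void> {
  // Azure Storage SDK path: downloads the archive in segments and reports
  // progress via DownloadProgress, producing the "Received X of Y (Z%)" lines.
}

async function downloadCacheHttpClient(url: string, dest: string): Promise<void> {
  // Plain HTTP path: streams the archive without DownloadProgress output.
}

async function downloadCache(archiveLocation: string, archivePath: string): Promise<void> {
  const archiveUrl = new URL(archiveLocation);
  // Only the Storage SDK variant emits the progress lines we see frozen
  // near 100%, which is why we suspect this branch is the one being taken.
  if (archiveUrl.hostname.endsWith('.blob.core.windows.net')) {
    await downloadCacheStorageSDK(archiveLocation, archivePath);
  } else {
    await downloadCacheHttpClient(archiveLocation, archivePath);
  }
}
```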
@fhammerl I'm not sure if that's reproducible; for us it happens quite randomly. It doesn't resolve eventually - it stalls forever until the runner timeout is triggered and fails the job. I've attached the logs from our failing job.
I am seeing this as well. It is a major issue in my workflow and the primary reason it is so unreliable.
We are seeing this issue as well at random times.
actions/cache introduced a download timeout (defaulting to 10 min) in version 3.0.8, see the documentation.
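If it helps, the guard behaves roughly like the sketch below. This is my paraphrase of the documented behaviour, not the toolkit source: downloadSegment is a hypothetical stand-in, and the real code is structured differently. The idea is that each segment download is raced against a timer, so a stalled segment fails fast instead of hanging.

```typescript
// Hedged sketch of a per-segment timeout guard, in the spirit of the
// SEGMENT_DOWNLOAD_TIMEOUT_MINS setting documented by actions/cache.
// downloadSegment is a hypothetical stand-in for the real segment fetch.

async function downloadSegment(index: number): Promise<void> {
  // ... fetch one segment of the cache archive ...
}

async function downloadAllSegments(segmentCount: number): Promise<void> {
  const timeoutMins = Number(process.env['SEGMENT_DOWNLOAD_TIMEOUT_MINS'] ?? '10');
  const timeoutMs = timeoutMins * 60 * 1000;

  for (let i = 0; i < segmentCount; i++) {
    const timer = new Promise<never>((_, reject) =>
      setTimeout(
        () => reject(new Error(`Segment ${i} timed out after ${timeoutMins} min`)),
        timeoutMs
      )
    );
    // If a segment stalls (e.g. frozen at 99.5%), the race rejects instead
    // of hanging until the job-level runner timeout kills the job.
    await Promise.race([downloadSegment(i), timer]);
  }
}
```

In a workflow, the documented knob for this is the SEGMENT_DOWNLOAD_TIMEOUT_MINS environment variable set on the job or step.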
Still saw this with a GitHub-managed runner (ubuntu-20.04.6) and actions/cache@v4.
The same log line repeating indefinitely for 40+ minutes:
Received 1653791515 of 1662180123 (99.5%)
This is the link to the job: https://github.com/mobigroup/gmtsar/actions/runs/3095306955/jobs/5009583727