adrianinsaval opened 1 year ago
This has happened twice now: https://github.com/adrianinsaval/pacman-repo/actions/runs/4262521615/jobs/7418125145 https://github.com/adrianinsaval/pacman-repo/actions/runs/4242812930/attempts/1
Third time: https://github.com/adrianinsaval/pacman-repo/actions/runs/4282485820/jobs/7456820854. How can I find out where it hangs? Is it happening only to me?
I can't really reproduce this problem on my own repositories. I wonder what the difference is.
This keeps happening every now and then, seemingly at random.
Would you like to try to research this in depth? I'm not sure where the timeout is coming from, and without a proper reproducible test case it's going to be rather hard. We already have retries, but they don't seem to trigger for your error case.
We are experiencing this too - https://github.com/flanksource/karina/actions/runs/4729694790/jobs/8392508080 and https://github.com/flanksource/karina/actions/runs/4363985067/jobs/7630860313 for example. Not sure what causes it - it is random and doesn't affect a specific artefact reliably.
What could I do to research this?
@adrianinsaval I'd do this in your case:

1. Clone the action and add a ton of debug messages (roughly like the sketch below).
2. Change your workflow to use your fork.
3. Run the workflow in a loop to try to eventually hit the timeout.
4. Check which of your debug messages came through, then fix the code accordingly.
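For step 1, this is the kind of logging I mean. It's only a rough sketch: it assumes the action is a Node action built on `@actions/core` running on Node 18+ (so global `fetch` is available), and `downloadWithTiming` is a made-up name, not the action's real code.

```typescript
// Rough sketch only: assumes a Node 18+ action built on @actions/core.
// downloadWithTiming is a placeholder name, not the action's real code.
import * as core from '@actions/core'

async function downloadWithTiming(url: string): Promise<Uint8Array> {
  const started = Date.now()
  core.info(`starting request: ${url}`)

  // If the hang is in connection setup, we never get past this await.
  const response = await fetch(url)
  core.info(`headers after ${Date.now() - started} ms, status ${response.status}`)

  // If the hang is mid-transfer, we stall here instead.
  const body = new Uint8Array(await response.arrayBuffer())
  core.info(`body after ${Date.now() - started} ms, ${body.byteLength} bytes`)

  return body
}
```

Whichever message is the last one to show up in the hung job log tells you which phase to look at.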
This is tedious but I don't really know of a better approach. We might have to configure some connect timeouts and retry on timeout or something like that.
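For the connect-timeout-plus-retry idea, a minimal sketch of what I mean (again assuming Node 18+ globals; `fetchWithTimeout` and `fetchWithRetry` are hypothetical helpers, not anything the action ships today):

```typescript
// Sketch of "connect timeout + retry on timeout", not what the action does today.
// Uses Node 18+ globals (fetch, AbortController); both helpers are hypothetical.
async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    // Abort the request if no response arrives within timeoutMs.
    return await fetch(url, { signal: controller.signal })
  } finally {
    clearTimeout(timer)
  }
}

async function fetchWithRetry(url: string, attempts = 3, timeoutMs = 30_000): Promise<Response> {
  let lastError: unknown
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fetchWithTimeout(url, timeoutMs)
    } catch (error) {
      // Covers both aborts (timeouts) and ordinary network errors.
      lastError = error
      console.warn(`attempt ${attempt}/${attempts} failed: ${error}`)
    }
  }
  throw lastError
}
```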