Closed matthewfeickert closed 3 years ago
very strange, timings do look elevated today according to my metrics -- let me look into whether something changed
I doubt this matters much, but the three timeouts shown above are happening at different stages:
@asottile retriggering the run from the GitHub comments has things passing now (after a long queue time): https://results.pre-commit.ci/run/github/118789569/1619557729.aLz8qBFvTBiIxfXakvechw
yeah the queue makes sense, I was kicking off a bunch of runs at the same time while the hosts were cycling.
there were no code changes during the period that led to higher timeouts, so I suspect one of the hosts got a noisy neighbor in AWS:
I'll be putting in some automated alerts to catch this particular failure mode in the future -- thanks for the report!
I'm going to send a message to the mailing list to make sure others know about this and follow up with a postmortem once I'm comfortable that it is resolved
I'll be watching this closely over the next couple of hours to make sure that fixed it
I'll also be sending out a postmortem entry to the mailing list
Awesome. :) Many thanks for this report and also for being :zap: fast in your feedback and help!
marking this all clear, run times have returned to normal after mitigation
postmortem

- root cause: unknown
- what went well
- what didn't go well
- follow-up
Hello, just curious: is there a way to extend the timeout? https://results.pre-commit.ci/run/github/145693916/1622449847.gyQnW8ktQPCcmngXvQmabA
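For anyone else hitting this: as far as I know the per-run time limit on pre-commit.ci isn't something repositories can raise, but the service does document a top-level `ci` map in `.pre-commit-config.yaml` whose `skip` key excludes slow hooks from CI runs (they still run locally). A minimal sketch, where `pylint` and the `pre-commit-hooks` entry are just illustrative placeholders for your own configuration:

```yaml
# .pre-commit-config.yaml
ci:
  # hook ids listed here are skipped on pre-commit.ci;
  # they still run in local `pre-commit run` invocations
  skip: [pylint]

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1  # example pin; use whatever rev your repo actually tracks
    hooks:
      - id: trailing-whitespace
```

This is a workaround rather than a timeout extension, so it only helps if one identifiable hook is the slow one.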
:wave: Hi. `pre-commit.ci` is failing by timeout for PR https://github.com/scikit-hep/pyhf/pull/1403, even though it passes locally in a fresh virtual environment (and also passes in `pre-commit.ci` at first, but then times out). c.f. https://results.pre-commit.ci/repo/github/118789569
and for a particular failing run
This is probably just a transitory issue, but I thought I'd still report it.
cc @lukasheinrich @kratsg
Also, an example of my claim that `pre-commit` passes locally: