Closed: Samuel-Therrien-Beslogic closed this issue 4 months ago
yes, I've been slowly getting around to debugging this. it's specific to your patch for some reason and I haven't gotten to the bottom of it.
fwiw it isn't affecting anyone else, so it's been lower priority.
if possible, please limit the commits made to that branch, since a moving target makes this problem more difficult to debug :)
> it's specific to your patch for some reason and I haven't gotten to the bottom of it.
> fwiw it isn't affecting anyone else, so it's been lower priority.
Awesome! Good to know.
> if possible, please limit the commits made to that branch, since a moving target makes this problem more difficult to debug :)
Certainly. Knowing you're on the case and there's nothing else to do but wait, I'll refrain from pushing anything to that branch. (it's not like it breaks my existing CI workflow anyway)
didn't get too far on this today -- locally it consumes a ton of ram (multiplied by n cpus) and then errors on some config file. that should still time out on pcci and be handled already, but from the logs, something is happening differently there before the run even starts. will have to dive deeper on an equivalent vm
I think I fixed this, or at least caused the runs to no longer be lost to the ether -- I believe the run of eslint was consuming all the memory and swapping forever. I've put better memory limits in place.
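roughly speaking, a cap like that can be done with an rlimit on the hook process -- here's a minimal Linux-only sketch of the general idea (not the actual pcci code; the 2 GiB value is a placeholder, not the real limit):

```python
import resource
import subprocess

CAP = 2 * 1024**3  # placeholder: 2 GiB address-space cap

def limit_memory():
    # RLIMIT_AS caps the child's virtual address space, so a runaway
    # hook fails with ENOMEM instead of swapping forever
    resource.setrlimit(resource.RLIMIT_AS, (CAP, CAP))

# preexec_fn runs in the child between fork and exec
subprocess.run(["pre-commit", "run", "--all-files"], preexec_fn=limit_memory)
```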
can you rebase your branch? (it seems to be conflicting now, preventing me from rerunning your change)
yep looks like that was the problem! now hitting OOM -- something in your linting pipeline is consuming a lot of memory: https://results.pre-commit.ci/run/github/771018559/1722215256.4fiKHgsGSZiOTrcMFSTEZg
Thanks! Now I can investigate what's making eslint consume so much memory during the run (at least that sounds like something I can maybe investigate locally without having to re-run pre-commit.ci every time).
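For instance (just a sketch; the 512 MiB figure is an arbitrary starting point for bisecting, not a known limit), capping Node's heap should make eslint fail fast with an out-of-memory trace instead of swapping:

```python
import os
import subprocess

# cap V8's old-generation heap so a runaway eslint run aborts quickly
# with a heap out-of-memory trace instead of grinding into swap
env = dict(os.environ, NODE_OPTIONS="--max-old-space-size=512")
subprocess.run(["npx", "eslint", "."], env=env)
```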
Hi! I was working on https://github.com/BesLogic/releaf-canopeum/pull/175, iteratively working out which dependencies I had to specify explicitly, which ones I could omit, and what I had to trim or make optional from our shared configs to fit under the 250MiB restriction (thanks, npm/node_modules).
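(Aside: a rough way to check that footprint locally, assuming a plain npm install in the hook repo:)

```python
import os

# rough on-disk size of node_modules, to compare against the 250MiB cap
total = 0
for root, _dirs, files in os.walk("node_modules"):
    for name in files:
        path = os.path.join(root, name)
        if not os.path.islink(path):  # skip .bin symlinks
            total += os.path.getsize(path)
print(f"{total / 2**20:.1f} MiB")
```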
Everything was going well until the pre-commit job suddenly got stuck in queue. I thought this could be a caching issue, so I opened a new PR on a new branch with rewritten (squashed) git history here, https://github.com/BesLogic/releaf-canopeum/pull/177, without success.
I searched this repo for similar issues and found
Here are the repo's jobs: https://results.pre-commit.ci/repo/github/771018559
I have no information on why the jobs could be stuck in queue. Is one of my dependencies problematic? Is there an issue on pre-commit.ci's side? Did I hit some sort of throttling? (I'd be fine with that if I just hit some weekly/monthly quota.) None of my other repos are having issues.
Here's my config atm: