tr7zw opened this issue 4 weeks ago
First of all, that's a first with this AI PR (also lmao about "If you no longer want Latta AI to attempt fixing issues on your repository, you can block this account."). But it actually seems correct at a glance.
For a bit of context: first, the EntityCulling deploy crashed halfway through (https://github.com/tr7zw/EntityCulling/actions/runs/11635994760/job/32406501958), so I manually uploaded the missing 15 or so files to Modrinth/CurseForge. But then, at the same time, 3d Skin Layers failed and only got 9 out of the 62(?) files published (https://github.com/tr7zw/3d-Skin-Layers/actions/runs/11636424329/job/32407800763), and I really don't feel like publishing 53 files manually 😵💫.
That's a great idea!
Honestly, I hadn't thought of it before, because the Mojang API usually has pretty good uptime. The chances of running into issues with it seemed about as likely as a GitHub Runner losing internet connectivity during the publishing process (which, ironically, happened to me just a few days ago :D).
This is actually the first outage of this kind I've ever seen.
> First of all, that's a first with this AI PR
Sigh. Just ignore this garbage (or, better yet, report it).
> and I really don't feel like publishing 53 files manually
Yikes, that's unfortunate. At this point, it might be easier to delete the versions that have already been published and try again a little later, once Mojang gets their stuff together.
Right now the plan is to just make it version 1.7.0.1 and retry publishing it tomorrow. Deleting releases is a bit tricky, and I can't re-run the publish, since mc-publish retrying to publish these releases would cause issues too. (Actually, if it could just skip the publish if a file with 1:1 the same name/target version/version/mod loader exists, that would be really helpful for recovering from these half-baked releases caused by network-related crashes.)
> Actually, if it could just skip the publish if a file with 1:1 the same name/target version/version/mod loader exists, that would be really helpful for recovering from these half-baked releases caused by network-related crashes
I really do need to implement something like this to make recovery from these unfortunate situations easier. It certainly should be possible with Modrinth (god bless the Modrinth development team and their actually competent API implementation), though I do have some doubts about the CurseForge Upload API in that regard.
Yeah, it's tricky. Makes me wish GitHub Actions worked the same way as Jenkins. On Jenkins, I could just re-trigger the job with a modified build file (from the web interface) where I comment out the tasks that are already done, so it just finishes the broken ones.
Yup, Modrinth has an endpoint to check if a file has already been uploaded to their servers, making it super simple to skip duplicate uploads in the event of re-runs.
CurseForge, on the other hand, doesn't offer a direct route for this, but it is possible to retrieve all the mod's files and then check their hashes:
"hashes": [
{
"value": "a0193dd17bf8a98868cd53edeffc144962770c95",
"algo": 1
},
{
"value": "42020dbc2c86e93f7f8e2d28a349c4a4",
"algo": 2
}
]
It seems that `"algo": 1` represents SHA-1, which is also used by Modrinth, so at least that's good.
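For reference, here's a rough sketch of what that duplicate check could look like, assuming Node 18+ (global `fetch`). The endpoint paths match the public Modrinth v2 and CurseForge Core API docs, but the function names and the exact response handling are just my own illustration:

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

async function sha1Of(path: string): Promise<string> {
  return createHash("sha1").update(await readFile(path)).digest("hex");
}

// Modrinth: GET /version_file/{hash} returns 200 if a version containing
// that exact file already exists, and 404 otherwise.
async function existsOnModrinth(filePath: string): Promise<boolean> {
  const hash = await sha1Of(filePath);
  const res = await fetch(
    `https://api.modrinth.com/v2/version_file/${hash}?algorithm=sha1`
  );
  return res.ok;
}

// CurseForge: no direct lookup, so list the mod's files and compare their
// SHA-1 hashes ("algo": 1 in the snippet above). Note that this endpoint
// is paginated; a real implementation would walk all pages.
async function existsOnCurseForge(
  filePath: string,
  modId: number,
  apiKey: string
): Promise<boolean> {
  const hash = await sha1Of(filePath);
  const res = await fetch(`https://api.curseforge.com/v1/mods/${modId}/files`, {
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`CurseForge API returned ${res.status}`);
  const { data } = (await res.json()) as {
    data: { hashes: { value: string; algo: number }[] }[];
  };
  return data.some((f) => f.hashes.some((h) => h.algo === 1 && h.value === hash));
}
```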
> On Jenkins, I could just re-trigger the job with a modified build file (from the web interface) where I comment out the tasks that are already done, so it just finishes the broken ones.
Like for real! It would be a lifesaver in so many situations. GitHub already has an option to rerun only failed jobs, but nothing for rerunning just the failed steps (and the steps they depend on). They could even pair this with some fancy-pants interface that lets us select the specific steps that need to be rerun, so we wouldn't have to manually edit the workflow file to get it done. But oh well, maybe in another 10 years they'll sort that out :D
Maybe related: piston failed again last night, thankfully on the first one. Now I retried, and it failed halfway through here: Shouldn't that have a retry as well?
Damn, these server unavailability problems have certainly become much more common than they were back in the ol' days. I guess I need to create a retry middleware for my `fetch` implementation and set it up for basically every API client.
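Something along these lines, perhaps. A minimal sketch with made-up names (`fetchWithRetry`, `RetryOptions`) and arbitrary retry/backoff values, not mc-publish's actual internals:

```typescript
type RetryOptions = { retries?: number; baseDelayMs?: number };

// Wraps the global fetch with retries and exponential backoff.
async function fetchWithRetry(
  url: string,
  init?: RequestInit,
  { retries = 3, baseDelayMs = 1000 }: RetryOptions = {}
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url, init);
      // Retry transient server errors (5xx); everything else is returned as-is.
      if (res.status < 500 || attempt >= retries) return res;
    } catch (err) {
      // Network-level failures (DNS, connection resets) are retried too.
      if (attempt >= retries) throw err;
    }
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```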
That would be amazing, best case before the 1.21.4 release wave starts 😅.
I don't think I'm going to make it, to be quite honest with you! The issue is that I've been transitioning mc-publish away from its current distribution model (due to GitHub's unhinged handling of Node runners) and moving away from Node.js in general for quite some time now, while simultaneously cleaning up a backlog of bug reports and feature requests. As a result, my local codebase now differs significantly from the current upstream state of the project, which makes it pretty difficult to rebase and backport features to v3.x in the meantime.
So, yeah, with that in mind, I don't think I'll be able to finish everything before 1.21.4 drops. On the bright side, though, 1.22 should (hopefully) be smooth sailing after all that! :D
Description
When Mojang's metadata server is having issues, the entire CI run might randomly fail because there is no retry logic.
Expected Behavior
Re-attempt loading the JSON a few times, same as when the Modrinth/CurseForge upload fails due to API/S3 errors. A minimal sketch of what that could look like follows; the URL is Mojang's public piston-meta manifest endpoint, while the retry count and delays are made-up values:
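```typescript
// Retry loading Mojang's version manifest a few times before failing the run.
async function loadVersionManifest(retries = 3): Promise<unknown> {
  const url = "https://piston-meta.mojang.com/mc/game/version_manifest_v2.json";
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(url);
      if (res.ok) return await res.json();
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err;
    }
    // Simple linear backoff between attempts.
    await new Promise((r) => setTimeout(r, 1000 * (attempt + 1)));
  }
  throw lastError;
}
```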
Actual Behavior
(That is the entire log; there is no reason provided.)
Version
3.3.0
Environment
GitHub Actions
Configuration
No response
Logs
No response
Other Information
No response