tycrek closed this issue 9 months ago
Agh. Never mind, a lookup of myself was immediately rate limited by Mojang through PlayerDB. Very odd.
PlayerDB also returns
{"message":"Mojang has rate limited this request.","code":"minecraft.rate_limited","data":{},"success":false,"error":true}
@astei can you check whether your repo is generating too much traffic? All Cloudflare Workers have been getting rate limited for some time now.
The Mojang rate limit for Cloudflare Workers is a long known bugbear. We’d have to basically host profile fetching outside of Cloudflare Workers, which isn’t great.
My IRL job keeps me further and further away from Minecraft and I’ve had limited internet access for the past few days.
Sounds cool, it's good to find new perspectives. Nowadays all Workers are getting rate limited by Mojang globally at the same time; it was non-stop for almost 24 hours, so maybe you are seeing higher traffic or something like that. I heard that on Cloudflare you can set up rate limiting too.
We've implemented a solution on PlayerDB that effectively uses some servers outside of Cloudflare to fetch profiles when running into 429s. That might be a good solution here now.
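For anyone wanting to try something similar, here's a minimal sketch of that kind of fallback, assuming a fetch-capable runtime and a hypothetical out-of-Cloudflare proxy at PROFILE_PROXY_URL (the proxy URL and its behaviour are assumptions, not how PlayerDB actually does it):

// Sketch: try Mojang directly, and only fall back to an external (non-Cloudflare) proxy
// when Mojang answers with HTTP 429. PROFILE_PROXY_URL is hypothetical.
const MOJANG_PROFILE = 'https://sessionserver.mojang.com/session/minecraft/profile/';
const PROFILE_PROXY_URL = 'https://profiles.example.com/profile/'; // your own fetcher outside Cloudflare

async function fetchProfile(uuid) {
    let response = await fetch(MOJANG_PROFILE + uuid);
    // Rate limited from this Worker's IP range; retry through the proxy instead
    if (response.status === 429) response = await fetch(PROFILE_PROXY_URL + uuid);
    if (!response.ok) throw new Error(`Profile lookup failed: HTTP ${response.status}`);
    return response.json();
}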
Last time I checked PlayerDB had rate limits too when Cloudflare was down.
Last time I checked PlayerDB had rate limits too
Yep, this was a recent change we made before the holidays to work around the increasing number of 429s we were seeing from Mojang. Outside of rate limits enforced by Mojang, we don't enforce any rate limits ourselves.
What do you mean? I'm saying that PlayerDB was rate limited by Mojang when Cloudflare was down.
I don't have a full solution, but as a potential workaround for users who are able to host outside of Workers: you can interface with Mojang's API directly, and with proper caching you should avoid rate limits. You could probably adapt this to be similar to what Cherry alluded to.
I've adapted my code from my project to a simple skin downloader using Axios and Sharp. Hopefully this helps someone else or at least saves some time:
// axios can easily be replaced with fetch or node-fetch
const axios = require('axios');
// image manipulation library. Can be replaced with Jimp if you prefer.
const sharp = require('sharp');

/**
 * Mojang API endpoints
 */
const MOJANG_API = {
    /**
     * Accepts either a username or UUID and returns both from Mojang
     */
    UUID: 'https://api.mojang.com/users/profiles/minecraft/',
    /**
     * Only accepts a UUID. Returns the profile data, containing the URL of the skin
     */
    SKIN: 'https://sessionserver.mojang.com/session/minecraft/profile/'
};

/**
 * Gets a player skin using the Mojang APIs
 */
const getSkin = (username) => new Promise((resolve, reject) =>
    // Get the UUID from the username
    axios.get(MOJANG_API.UUID.concat(username))
        .then((uuidResponse) => {
            // If the response is HTTP 204, the username is not valid
            if (uuidResponse.status === 204) throw new Error('Username not found');
            // Get the player's profile via the UUID
            return axios.get(MOJANG_API.SKIN.concat(uuidResponse.data.id));
        })
        // Why does Mojank put encoded JSON *inside yet another JSON*? Who knows
        .then((profileResponse) => Buffer.from(profileResponse.data.properties[0].value, 'base64').toString('utf8'))
        // Download the actual skin from the URL in the decoded profile
        .then((profileJson) => axios.get(JSON.parse(profileJson).textures.SKIN.url, { responseType: 'arraybuffer' }))
        // responseType 'arraybuffer' already gives raw bytes, so no base64 decoding is needed here
        .then((imageResponse) => sharp(Buffer.from(imageResponse.data)))
        .then(resolve)
        .catch(reject));

if (process.argv.length < 3) {
    console.log('Please provide a username');
    process.exit(1);
} else getSkin(process.argv[2])
    .then((img) => img.toFile('skin.png'))
    .then(() => console.log('Image saved'))
    .catch(console.error);
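On the caching point above, here's a naive sketch of wrapping getSkin with an in-memory cache so repeat lookups skip Mojang entirely (the one-hour TTL is an arbitrary choice, and cache misses are of course still subject to Mojang's limits):

// Sketch: naive in-memory cache around getSkin. For anything serious you'd probably
// cache the raw skin buffer (or persist it to disk/KV) rather than the sharp instance,
// but this shows the idea.
const CACHE_TTL = 60 * 60 * 1000; // 1 hour; arbitrary choice
const skinCache = new Map();

const getSkinCached = (username) => {
    const key = username.toLowerCase();
    const hit = skinCache.get(key);
    if (hit && Date.now() - hit.fetchedAt < CACHE_TTL) return Promise.resolve(hit.skin);

    return getSkin(key).then((skin) => {
        skinCache.set(key, { skin, fetchedAt: Date.now() });
        return skin;
    });
};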
If you need to manipulate skins or reconstruct avatars, you can do it with Sharp; it's just a bit of a pain in the ass to figure out and probably adds a ridiculous amount of overhead.
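For example, a rough sketch of cropping the face out of a downloaded skin and scaling it up with Sharp (the 8x8 region at x=8, y=8 is the head's front face in the standard skin layout; the output size is arbitrary, and the hat overlay at x=40, y=8 isn't composited here):

// Sketch: crop the 8x8 face region and upscale it with nearest-neighbour so pixels stay crisp
const sharp = require('sharp');

const renderFace = (skinPath, size = 256) =>
    sharp(skinPath)
        .extract({ left: 8, top: 8, width: 8, height: 8 })
        .resize(size, size, { kernel: 'nearest' })
        .toFile('face.png');

renderFace('skin.png').then(() => console.log('Face saved'));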
Unfortunately this seems to be the end of the road for this problem, at least until Mojang acknowledges the issue.
What's the current cache strategy? I'm getting 429s on stuff that was fetched earlier and should be cached, or should at least fall back to the cached copy if the request is rate limited.
Two strategies I can see:
I don't have a direct alternative to Crafthead's utility routes, but I do have an alternative UUID/skin lookup service for Mojang's API: https://github.com/tycrek/mulv
I'm hosting it via Vercel Serverless Functions at https://mulv.tycrek.dev, but please consider hosting it yourself as I'm only on the free tier and need it for my own uses as well.
So far in my testing I haven't hit any HTTP 429s on Mojang's API, but you never know what'll happen.
This should now be fixed with the latest change. We now go through PlayerDB to grab profile information. Feel free to open a new issue if you run into any problems.
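For anyone curious what that looks like, here's a rough sketch of a profile lookup through PlayerDB's public endpoint, assuming a fetch-capable runtime (the response field names shown are my reading of PlayerDB's format, so double-check against the live API before relying on them):

// Sketch: look up a Minecraft profile through PlayerDB instead of hitting Mojang directly.
// Field names (data.player.id, data.player.username) should be verified at https://playerdb.co.
const PLAYERDB_API = 'https://playerdb.co/api/player/minecraft/';

async function lookupProfile(usernameOrUuid) {
    const response = await fetch(PLAYERDB_API + usernameOrUuid);
    const body = await response.json();
    if (!body.success) throw new Error(body.message);
    return { id: body.data.player.id, username: body.data.player.username };
}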
I noticed my site is failing to load resources, causing my users to receive an HTTP 500 (it generates a pack based off the user's skin). What I believe is happening is that Crafthead is running into HTTP 429 (Too Many Requests) against the Mojang UUID/username API.
One possible fix may be to use PlayerDB.co instead of the Mojang API used here:
https://github.com/astei/crafthead/blob/b00709697fea5f84337aef4bf28e095aed4a94f9/worker/services/mojang/api.ts#L101
It is also sponsored by Nodecraft, same as Crafthead ;)