onei opened this issue 9 years ago
MediaWiki has caching wrappers as well, notably wfGetCache, which provides access to BagOStuff (see http://www.mediawiki.org/wiki/Memcached#Using_memcached_in_your_code for a slightly outdated example). Throttling for an API is easy, because we could have something similar to maxlag, but dealing with parser functions is much trickier and I've yet to figure out a solution for that.
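For illustration, a minimal sketch of what using that wrapper could look like here (`fetchHiscoreData()` and the key layout are made up for the example; `wfGetCache`/`wfMemcKey` are the older-style globals mentioned above):

```php
// Rough sketch: cache a hiscore lookup in whatever BagOStuff backend the
// wiki has configured. fetchHiscoreData() is a hypothetical helper.
$cache = wfGetCache( CACHE_ANYTHING );      // returns a BagOStuff instance
$key = wfMemcKey( 'rshiscores', $player );  // wiki-prefixed cache key

$data = $cache->get( $key );
if ( $data === false ) {
	// Not cached (or expired): hit the hiscores API and store the result.
	$data = fetchHiscoreData( $player );
	if ( $data !== false ) {
		$cache->set( $key, $data, 3600 );   // e.g. cache for an hour
	}
}
```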
AbuseFilter implements a throttling mechanism using memcached. One possible way of handling the limit would be to return an error code for hiscores indicating the throttle has been reached, then schedule a time-delayed update of the page, though that is complicated. The alternative would be what you propose, which may well be better.
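For what it's worth, that memcached-style counter boils down to roughly this (similar in spirit to AbuseFilter's throttle action; the key name, period and limit are placeholders, not a proposal for the actual numbers):

```php
// Sketch of a simple throttle counter, inside the hypothetical lookup handler.
$cache = wfGetCache( CACHE_ANYTHING );
$key = wfMemcKey( 'rshiscores', 'throttle' );
$period = 60;  // seconds the counter lives
$limit = 30;   // max lookups per period

$count = $cache->incr( $key );
if ( $count === false ) {
	// Counter doesn't exist yet; create it with an expiry.
	$cache->add( $key, 1, $period );
	$count = 1;
}
if ( $count > $limit ) {
	// Over the limit: return an error code instead of hitting the API.
	return 'ERR_THROTTLED';  // placeholder error code
}
```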
Maybe a better approach is not to throttle the requests but to record when the requests start timing out (error code C28). When one is encountered, stop all requests for the next 5, 10 or 15 minutes and then try again. I'd also reduce the timeout to something more like 5-10 seconds, since the current timeout alone causes previews to take 25+ seconds to load.
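That back-off could be as simple as a cooldown flag in the same cache. A sketch, with the key name, return code and 10-minute window as placeholders:

```php
// Sketch of "stop trying for a while after a timeout", inside the
// hypothetical lookup handler.
$cache = wfGetCache( CACHE_ANYTHING );
$key = wfMemcKey( 'rshiscores', 'cooldown' );

if ( $cache->get( $key ) ) {
	// We saw a timeout recently; don't even attempt the request.
	return 'ERR_COOLDOWN';  // placeholder error code
}

$result = fetchHiscoreData( $player );  // hypothetical helper from above
if ( $result === 'C28' ) {
	// The request timed out: block further requests for 10 minutes.
	$cache->set( $key, 1, 600 );
}
```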
Somewhat off-topic, but error codes aren't all that meaningful to end users either. It makes sense to store them that way for comparison in the code, but on the page we could use some messages instead that explain the errors better.
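A rough sketch of what that could look like, mapping the internal codes to interface messages at output time (the message keys, and every code other than C28, are invented for illustration):

```php
// Sketch: keep the internal error codes for comparisons in code, but show
// a friendlier i18n message on the page. Message keys are made up here.
function renderHiscoreError( $code ) {
	$known = array(
		'C28' => 'rshiscores-error-timeout',
		'ERR_THROTTLED' => 'rshiscores-error-throttled',
		'ERR_COOLDOWN' => 'rshiscores-error-cooldown',
	);
	$msgKey = isset( $known[$code] )
		? $known[$code]
		: 'rshiscores-error-unknown';
	return wfMessage( $msgKey, $code )->inContentLanguage()->text();
}
```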
That mechanism is a great idea either way; throttling or sending requests while we're blocked or Jagex is having issues doesn't do any good. Either a throttling or a delaying mechanism could be implemented so that we return a failure, note it as temporary, and set the CacheTime to a reasonable value for retrying the request. The bad part about not throttling, though, shows up on global cache invalidation: every request would occur, some would get through, the rest would be blocked, and after a certain period that would repeat until every page was handled.

In working through that mechanism, I realize memcached is needed just to reduce the load caused by previewing, and for when the parser cache is disabled by certain parser functions. One option for a better user experience is to give previews a shorter timeout, but perhaps it should be reduced in either case, since if the lookup hasn't returned after a few seconds it probably won't. Ultimately we need to find the balance between potentially long page loads and returning results without getting blocked.
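To make the temporary-failure-plus-shorter-CacheTime idea concrete, the parser-function side could look roughly like this (assuming `$parser` is in scope and that `ParserOptions::getIsPreview()` is available; the 15-minute retry value is a placeholder):

```php
// Sketch: on a temporary failure, shorten this page's parser cache expiry
// so the lookup is retried soon, and use a shorter HTTP timeout for
// previews. All values are placeholders.
$isPreview = $parser->getOptions()->getIsPreview();
$timeout = $isPreview ? 5 : 10;  // seconds for the HTTP request

$data = fetchHiscoreData( $player, $timeout );  // hypothetical helper
if ( $data === false ) {
	// Temporary failure: retry on a parse after ~15 minutes instead of
	// keeping the error around for the full parser cache lifetime.
	$parser->getOutput()->updateCacheExpiry( 900 );
	return renderHiscoreError( 'ERR_COOLDOWN' );  // or whichever code applies
}
```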
On error handling, I agree. As implemented, error handling has been left to the user, and so far no one has actually done it (as seen from how Template:Hiscore broke without helpful feedback). At the very least, by emitting proper error output we gain support for #iferror.
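Since #iferror keys off the standard class="error" markup, wrapping whatever message we emit should be enough for templates to catch failures themselves (sketch, reusing the hypothetical helper above):

```php
// Sketch: wrap the human-readable message in the standard error markup so
// #iferror in templates like Template:Hiscore can detect it.
return Html::element(
	'span',
	array( 'class' => 'error' ),
	renderHiscoreError( $code )
);
```

Template editors could then guard the lookup with #iferror and supply their own fallback output.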
@TehKittyCat Seems easier to have the discussion here rather than between 2 talk pages :)
Anyway, we currently have a cache that prevents more than 2 player lookups per page load; the next thing to look at is how to prevent mass lookups when pages all suddenly get re-parsed at once, e.g. the message cache rebuilding or a change to a widely used template, both of which cause every page to be re-parsed.
A combination of approaches can be used here: