paquettg / leaguewrap

League of Legends API wrapper
MIT License

improvement ideas / missing features #114

Open Inve1951 opened 8 years ago

Inve1951 commented 8 years ago

The API is nice and good as is, but I'm missing some features:

1. Async requests, to make multiple calls simultaneously instead of chaining them. This would reduce page load times a lot (e.g. requesting MatchDtos for a bunch of matches). Guzzle supports async requests via promises.
2. Create the cache key inside the Cache class. This would allow for easy hook-up of databases (it could be done by passing an associative array); there's no way to pull info from a database with just the MD5 key.
3. Separate remember times (TTL) for successful API calls and HTTP errors. Error caching makes sense to me, but why remember those for the same duration as the DTOs? E.g. when Riot returns a 503 on a reliable request, it makes no sense to cache it for much longer than a minute.

dnlbauer commented 8 years ago

For number 1 I disagree, because if you need async requests you can wrap the LeagueWrap calls and make them async on your own. Building it into the library would just blow up the code without much gain. But that's just my opinion.

I agree on number 2 that something other than an MD5 hash passed to the cache might be more suitable for custom implementations of CacheInterface. It would be better to pass something like name/id and region. The Memcached implementation can then calculate the MD5 internally, and every other cache can choose its own key scheme.
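Something along these lines is what I have in mind (just a sketch; the interface and class names are made up for illustration, not the current LeagueWrap CacheInterface):

```php
<?php
// Sketch: pass a structured key to the cache instead of a pre-hashed md5
// string, and let each backend decide how to flatten it.

interface StructuredCacheInterface
{
    /**
     * @param array $key e.g. ['region' => 'euw', 'endpoint' => 'summoner', 'id' => 123]
     */
    public function set(array $key, $value, $seconds);
    public function get(array $key);
}

class MemcachedCache implements StructuredCacheInterface
{
    private $memcached;

    public function __construct(\Memcached $memcached)
    {
        $this->memcached = $memcached;
    }

    public function set(array $key, $value, $seconds)
    {
        // Memcached still wants a flat string key, so hash it internally
        return $this->memcached->set($this->flatten($key), $value, $seconds);
    }

    public function get(array $key)
    {
        return $this->memcached->get($this->flatten($key));
    }

    private function flatten(array $key)
    {
        ksort($key);
        return md5(json_encode($key));
    }
}
```

A database-backed implementation could instead store region/endpoint/id in separate columns and query on them, which is impossible with only the md5 string.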

I also like the idea of different TTLs.
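Picking the remember time based on the response would already cover it (sketch only; the constants and names are made up):

```php
<?php
// Sketch of the separate-TTL idea: remember DTOs for a long time, but remember
// transient HTTP errors (e.g. a 503 from Riot) only briefly.

const TTL_SUCCESS = 3600; // successful responses: an hour, or per-endpoint
const TTL_ERROR   = 60;   // errors: a minute at most

function rememberTtl($statusCode)
{
    return $statusCode >= 400 ? TTL_ERROR : TTL_SUCCESS;
}

// e.g. $cache->set($key, $dtoOrError, rememberTtl($response->getStatusCode()));
```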

Pull requests are always welcome ;)


Inve1951 commented 8 years ago

> For number 1 I disagree, because if you need async requests you can wrap the LeagueWrap calls and make them async on your own. Building it into the library would just blow up the code without much gain. But that's just my opinion.

This could indeed be handled easily with an implementation of ClientInterface. Then all that's left is to toggle async on/off on the ClientInterface instance. I didn't think of doing it that way earlier. I might write this soon (in a week or two), once I need it, and if you want, I'll upload it of course.
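Roughly what I'm thinking of, sketched with plain Guzzle 6 promises (the base URI, endpoint, API key and match ids below are placeholders, and this isn't tied to the real ClientInterface signature):

```php
<?php
// Sketch: fetch several matches concurrently instead of one request at a time.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client   = new Client(['base_uri' => 'https://euw.api.pvp.net']);
$apiKey   = 'YOUR-API-KEY';                        // placeholder
$matchIds = [1000000001, 1000000002, 1000000003];  // example ids

$promises = [];
foreach ($matchIds as $id) {
    // getAsync() returns a promise instead of blocking per request
    $promises[$id] = $client->getAsync("/api/lol/euw/v2.2/match/$id", [
        'query' => ['api_key' => $apiKey],
    ]);
}

// Wait for all requests to finish; settle() collects failures instead of throwing
$results = \GuzzleHttp\Promise\settle($promises)->wait();

foreach ($results as $id => $result) {
    if ($result['state'] === 'fulfilled') {
        $match = json_decode((string) $result['value']->getBody(), true);
        // ... use $match
    }
}
```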

The other two points haven't been a big deal for me up until now, since I can just clear the cache, and my project is still at an early stage where a database isn't needed. Those points will come up again for me, though, and if nobody has implemented them by then, I might do it.

By the way, rate limits are unique per API key. At the moment, to run multiple projects/API keys on the same server without sacrificing your rate limits, you'd have to e.g. prefix the MD5 key in a custom CacheInterface implementation per project.
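The prefix itself could be a small decorator around whatever cache you already use (sketch; the set/get/has method names are assumed for illustration, not copied from the real CacheInterface):

```php
<?php
// Sketch: namespace every cache key per project so two projects sharing one
// Memcached server don't overwrite each other's entries (or limit counters).

class PrefixedCache
{
    private $inner;
    private $prefix;

    public function __construct($inner, $prefix)
    {
        $this->inner  = $inner;
        $this->prefix = $prefix;
    }

    public function set($key, $value, $seconds)
    {
        return $this->inner->set($this->prefix . $key, $value, $seconds);
    }

    public function get($key)
    {
        return $this->inner->get($this->prefix . $key);
    }

    public function has($key)
    {
        return $this->inner->has($this->prefix . $key);
    }
}
```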

dnlbauer commented 8 years ago

> By the way, rate limits are unique per API key. At the moment, to run multiple projects/API keys on the same server without sacrificing your rate limits, you'd have to e.g. prefix the MD5 key in a custom CacheInterface implementation per project.

Yes, you are right. It goes even further: limits are also unique per region, so if Riot gives you, say, 3000 requests per 10 seconds, you can do 3k to EUW and another 3k to NA in that interval. It's correct that the current implementation doesn't take this into account. Maybe you should open another issue for that so it can be tracked separately.

If you find yourself patching CacheInterface in the described way, don't hesitate to make a pull request. I'm currently using my own database layer on top of LeagueWrap just because this isn't possible yet (no time to fix it :/).

Inve1951 commented 8 years ago

The region is taken into account already: `$this->key = "leagueWrap.hits.$region.$hits.$seconds";`. Since I'm running Windows and PHP 7 x64, where Memcached isn't available as a PECL extension, I wrote my own limit class for WinCache. This also means I won't be able to test any changes to the stock Cache class on my current dev environment.
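For reference, the WinCache version is roughly this (a standalone sketch with the same key layout as the snippet above, not the exact class I'm running):

```php
<?php
// Sketch: count hits per region/limit window in WinCache's user cache.

class WincacheLimit
{
    private $key;
    private $hits;
    private $seconds;

    public function setLimit($hits, $seconds, $region)
    {
        $this->key     = "leagueWrap.hits.$region.$hits.$seconds";
        $this->hits    = $hits;
        $this->seconds = $seconds;
        // add() only creates the counter if it doesn't exist yet
        wincache_ucache_add($this->key, 0, $seconds);
    }

    public function hit($count = 1)
    {
        $current = wincache_ucache_inc($this->key, $count);
        return $current !== false && $current <= $this->hits;
    }

    public function remaining()
    {
        $used = wincache_ucache_get($this->key, $success);
        return $success ? max(0, $this->hits - $used) : $this->hits;
    }
}
```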

dnlbauer commented 8 years ago

I also don't want to make changes like that until @paquettg reports back, because this would be a breaking change for custom Cache implementations, and therefore it would be nice to bump the Composer version number before doing it. That way everyone on dev-master who doesn't want to adopt the change immediately can switch to the Packagist version 0.6.4.