Closed Ortovoxx closed 3 years ago
Yes, that is true. Caching results is necessary to keep the API from crashing under high load. The current implementation is not easily scalable and does not fail gracefully in case of errors.
There is no way to turn the cache off from a request perspective at this point. I could lower the cache a bit, to something like 1 minute. A lower TTL than that does not make much sense given the time it takes to execute 'steamcmd' commands and the caching Steam itself does.
Would a cache of 1 minute solve it for now? Besides that I want to rewrite the API and split the engine into separate workers (that allow for scaling). Until then you can always host the API yourself by running the code or use the official Docker image.
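For illustration, the TTL-based caching being discussed can be sketched roughly like this. This is a minimal in-memory sketch, not the project's actual implementation; the class name and structure are made up for the example:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry.

    Illustrative sketch only; not the steamcmd/api implementation.
    """

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            # Entry has expired; drop it and report a miss.
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

Lowering the TTL from 5 minutes to 1 minute in a scheme like this is just a matter of changing `ttl_seconds`, at the cost of more cache misses (and therefore more slow `steamcmd` runs).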
I understand why you have added it; it makes sense.
A 1 minute cache would solve the problem temporarily until I can get the Docker image up and running myself, which seems to be the only solution as I only need data (actually just the build IDs) from 2 specific apps.
The worker solution does sound like the best bet. I assume this means implementing some sort of queue for incoming requests from the API, with workers handling those requests as and when they can.
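The queue-plus-workers idea described above could be sketched like this, using Python's standard `queue` and `threading` modules. This is a toy sketch under my own assumptions, not the planned rewrite; the slow `steamcmd` lookup is simulated with a placeholder:

```python
import queue
import threading

def worker(jobs, results):
    # Each worker pulls app IDs off the shared queue and performs the
    # slow lookup (a real implementation would invoke steamcmd here),
    # so the request-handling layer never blocks on it directly.
    while True:
        app_id = jobs.get()
        if app_id is None:  # sentinel: shut the worker down
            break
        results[app_id] = f"info for {app_id}"  # placeholder for steamcmd
        jobs.task_done()

jobs = queue.Queue()
results = {}
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(2)]
for t in threads:
    t.start()

# Enqueue a couple of app IDs, wait for completion, then stop the workers.
for app_id in (730, 440):
    jobs.put(app_id)
jobs.join()
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()
```

Scaling then becomes a matter of running more workers, possibly on separate machines with a proper message broker instead of an in-process queue.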
Thank you for providing the service. Do let me know if I can help in any way.
Cool, 1 minute should probably still be high enough to avoid crashing the API. In PR https://github.com/steamcmd/api/pull/21 the cache has been lowered to 1 minute (and will automatically be applied to the live API).
And I understand. If the API would be stable enough with a lower cache TTL I would lower it, but it kept crashing way too much. The steamcmd command that is run with every request is a pretty slow process, which makes every request pretty resource intensive. Let me know if you need help hosting your own instance of the API. I would be happy to help out.
Thank you for your recognition :)
Maybe it is worth increasing the cache time back to 5 minutes after looking at the recent uptime
I will self-host the API to prevent you having to accommodate a single user. Thank you anyways.
May I ask whereabouts you host?
Yes, you are right. Thank you for your message! For now I have increased the TTL again to 2 minutes; let's see if it holds.
Alright, I understand. I wish the API were stable enough in its current state to easily process these requests, but that is sadly not the case.
Of course! The API is currently hosted for free on Heroku, proxied through CloudFront. Part of the stability issues comes from the free tier limitations of Heroku (memory limit and connection duration limit of 30s), but the issues are mostly caused by the way the API currently executes steamcmd in a blocking way.
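To illustrate the blocking problem, one common fix is to launch the subprocess asynchronously so the server can keep serving other requests while it runs. This is a hedged sketch of that approach with `asyncio`, not the project's code; the example substitutes a harmless command for steamcmd:

```python
import asyncio

async def run_command(args):
    # Launch the external command without blocking the event loop.
    # Other coroutines (e.g. incoming HTTP requests) keep running
    # while we await the subprocess.
    proc = await asyncio.create_subprocess_exec(
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _err = await proc.communicate()
    return out.decode()

# Usage with a stand-in command instead of the slow steamcmd binary:
# output = asyncio.run(run_command(["echo", "hello"]))
```

With a blocking `subprocess.run` call instead, each slow steamcmd invocation ties up a request handler for its full duration, which is what makes the API fall over under load.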
I noticed that you have implemented a caching mechanism recently. Is there a way to turn this off? I can't find one.
If you are using it to avoid API spam, perhaps an auth system with tokens may be in order? If not, I will probably self-host the API.