steamcmd / api

Read-only API interface for steamcmd app_info
https://www.steamcmd.net
MIT License

After a lot of requests for info, an error appears #48

Closed navi705 closed 1 year ago

navi705 commented 1 year ago

I am using a Docker container. This error appears in the logs after continuously requesting information for 1 to 4 hours. At first I thought it was a rate limit from the constant requests to Steam and that Steam was blocking me, but if I restart the Docker container the error disappears. There is nothing wrong with the appid either; if I re-request the information, there are no errors. Is this a bug or am I missing something?

INFO:     172.17.0.1:33326 - "GET /v1/info/1370580 HTTP/1.1" 500 Internal Server Error
2023-07-19 14:35:43     raw_response = await run_endpoint_function(
2023-07-19 14:35:43   File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
2023-07-19 14:35:43     return await run_in_threadpool(dependant.call, **values)
2023-07-19 14:35:43   File "/usr/local/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
2023-07-19 14:35:43     return await anyio.to_thread.run_sync(func, *args)
2023-07-19 14:35:43   File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
2023-07-19 14:35:43     return await get_asynclib().run_sync_in_worker_thread(
2023-07-19 14:35:43   File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
2023-07-19 14:35:43     return await future
2023-07-19 14:35:43   File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
2023-07-19 14:35:43     result = context.run(func, *args)
2023-07-19 14:35:43   File "/data/./main.py", line 49, in read_app
2023-07-19 14:35:43     if not info["apps"]:
2023-07-19 14:35:43 TypeError: 'NoneType' object is not subscriptable
2023-07-19 14:35:44 Traceback (most recent call last):
2023-07-19 14:35:44   File "src/gevent/greenlet.py", line 908, in gevent._gevent_cgreenlet.Greenlet.run
2023-07-19 14:35:44   File "/usr/local/lib/python3.10/site-packages/steam/core/connection.py", line 97, in _reader_loop
2023-07-19 14:35:44     rlist, _, _ = gselect([self.socket], [], [])
2023-07-19 14:35:44   File "/usr/local/lib/python3.10/site-packages/gevent/select.py", line 175, in select
2023-07-19 14:35:44     sel_results = _original_select(rlist, wlist, xlist, 0)
2023-07-19 14:35:44 ValueError: filedescriptor out of range in select()
2023-07-19 14:35:44 2023-07-19T07:35:44Z <Greenlet at 0x7f9ee4e7a0c0: <bound method Connection._reader_loop of <steam.core.connection.TCPConnection object at 0x7f9ee5210940>>> failed with ValueError
2023-07-19 14:35:44 
2023-07-19 14:35:48 ERROR:    Exception in ASGI application
2023-07-19 14:35:48 Traceback (most recent call last):
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
2023-07-19 14:35:48     result = await app(  # type: ignore[func-returns-value]
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
2023-07-19 14:35:48     return await self.app(scope, receive, send)
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 271, in __call__
2023-07-19 14:35:48     await super().__call__(scope, receive, send)
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 118, in __call__
2023-07-19 14:35:48     await self.middleware_stack(scope, receive, send)
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-07-19 14:35:48     raise exc
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-07-19 14:35:48     await self.app(scope, receive, _send)
2023-07-19 14:35:48   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-07-19 14:35:48 Connecting via steamclient
2023-07-19 14:35:48 Retrieving app info for: 1370600, retry count: 1
2023-07-19 14:35:48 Failed in retrieving app info for app id: 1370600
2023-07-19 14:35:48 This operation would block forever
2023-07-19 14:35:48     Hub: <Hub '' at 0x7f9ee5cf4900 epoll pending=0 ref=0 fileno=1025 thread_ident=0x7f9ee5aff700>
2023-07-19 14:35:48     Handles:
2023-07-19 14:35:48 []
jonakoudijs commented 1 year ago

Which version of the image are you using? A couple of versions contained a memory leak; the latest version contains the fix. Sadly I have not yet had the time to find the actual issue, but I worked around it for now by setting a fixed number of requests a worker will process before being forced to restart. Keep in mind that by default 4 workers are running in the container, so the automatic restart of the separate workers should not impact availability.

Can you try using the latest tag or specifying the latest version, v1.13.1?

navi705 commented 1 year ago

I apologise, but I'm a noob at this and couldn't find the version of the current container. But I can say with certainty that yesterday I installed the latest version on my friend's computer and he ran into a similar situation after about 1 hour. As soon as he wakes up, I will share his error log if necessary. I also wanted to say thank you very much for this program.

jonakoudijs commented 1 year ago

Interesting. It would help to double-check which version your friend is running. Perhaps it was cached somewhere and he is not yet using the version that contains the "fix". Do you set a different startup CMD by any chance? Because the "fix" is not in the code but in the startup CMD of the container.

See this line in the Dockerfile:

CMD gunicorn main:app --max-requests 3000 --max-requests-jitter 150 --workers $WORKERS --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:$PORT

Please post as much information as possible. Although I really think that for some reason this "fix" is missing and that is why you are encountering issues after x amount of hours.

navi705 commented 1 year ago

We didn't modify the CMD. I just checked with my friend who has the latest version; the CMD string has the fix. Could it be because we are on Windows? We installed it like this: docker pull steamcmd/api:latest and then docker run -p 8000:8000 -d steamcmd/api:latest. What other information may be useful?

jonakoudijs commented 1 year ago

That looks good. What kind of requests do you (and your friend) normally do? Do you request info on just 1 app id or several? One thing you could try is to set the --max-requests value to a much lower value. Since this is set in the CMD, that looks something like this:

docker run -p 8000:8000 -d steamcmd/api:latest gunicorn main:app --max-requests 500 --max-requests-jitter 150 --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

Also, can you perhaps check the memory usage of the container/processes in the container? A tool like ctop can easily show you the usage. It might help to check the usage every 15 minutes or so and see if the CPU and/or memory usage slowly increases.
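
If it helps, here is a small polling sketch using the Docker SDK for Python (the docker package); the container name is a placeholder you would need to replace, and the 15-minute interval matches the suggestion above:

import time
import docker  # pip install docker

client = docker.from_env()
container = client.containers.get("steamcmd-api")  # replace with your container name or id

while True:
    # stats(stream=False) returns a one-off snapshot of the container's resource usage
    stats = container.stats(stream=False)
    usage_mib = stats["memory_stats"]["usage"] / (1024 * 1024)
    print(f"container memory usage: {usage_mib:.1f} MiB")
    time.sleep(900)  # check every 15 minutes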

navi705 commented 1 year ago

We're going through all the Steam app ids; we're trying to build a database. CPU is behaving normally, maxing out at 3%, but RAM grows by about 218 megabytes every 30 minutes. As a temporary solution on Windows, I decided to restart the container periodically.

PowerShell

 While ($True) {
     Start-Sleep -Seconds 1800
     docker restart <container-id>
 }

I would also like to ask whether it is intended that in the new version the logs only show up periodically, rather than on every request like before?

I would like to thank you again for your hard work!

jonakoudijs commented 1 year ago

Your findings confirm my suspicion that you are still experiencing the memory leak. This should have been fixed by the --max-requests setting, especially when set to a much lower value as I recommended. Perhaps Docker is not successfully pulling the latest image, or the CMD is overwritten somewhere. Could you run an inspect on the running container and post the output here:

docker ps
docker inspect <container-id>

Regarding the displaying of the logs: the current logging is really basic and is mostly used to show when app info is not retrieved from cache and the number of tries it took to retrieve the app information. Sometimes the steam module picks Steam servers that fail, and therefore a retry mechanism was built in.
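
To give an idea of what that retry mechanism looks like, here is a rough sketch (not the exact code in this repository; the get_product_info call comes from the steam module's SteamClient, and the retry count is an arbitrary example):

import logging

MAX_RETRIES = 3

def get_app_info(client, app_id):
    # Individual Steam servers can fail, so retry a few times before giving up.
    for attempt in range(1, MAX_RETRIES + 1):
        logging.info("Retrieving app info for: %s, retry count: %s", app_id, attempt)
        info = client.get_product_info(apps=[app_id])
        if info:
            return info
    logging.error("Failed in retrieving app info for app id: %s", app_id)
    return None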

If you mean that these log lines in general sometimes take a while to appear: you can use PYTHONUNBUFFERED=1. If you set this environment variable in your container, it will not buffer the output and will push it straight to stdout. You can simply add it to the startup command:

docker run -e "PYTHONUNBUFFERED=1" -p 8000:8000 -d steamcmd/api:latest 

Thank you for the kind words :)

navi705 commented 1 year ago

docker run -p 8000:8000 -d steamcmd/api:latest gunicorn main:app --max-requests 500 --max-requests-jitter 150 --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 -e "PYTHONUNBUFFERED=1"

That solved the problem, thank you.