Closed: navi705 closed this 1 year ago

I am using a Docker container. This error appears in the logs after continuously requesting information for 1 to 4 hours. At first I thought Steam was rate-limiting and blocking me for the constant requests, but if I restart the Docker container the error disappears. There is no issue with the app id itself: if I re-request the same information, there are no errors. Is this a bug or am I missing something?
Which version of the image are you using? A couple of versions contained a memory leak; the latest version contains the fix. Sadly I have not yet had the time to find the actual issue, but I worked around it for now by setting a fixed number of requests a worker will process before being forced to restart. Keep in mind that by default 4 workers are running in the container, so the automatic restart of the separate workers should not impact availability.
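For reference, the same worker-recycling idea can also be expressed as a gunicorn config file instead of CLI flags. This is only a minimal sketch; the values mirror the CMD quoted later in this thread, and the image itself passes them on the command line:

```python
# gunicorn.conf.py -- a sketch of the worker-recycling workaround.
# Values mirror the CLI flags quoted later in this thread; adjust as needed.
bind = "0.0.0.0:8000"
workers = 4                                      # default worker count
worker_class = "uvicorn.workers.UvicornWorker"   # run the ASGI app under gunicorn
max_requests = 3000          # recycle a worker after this many requests
max_requests_jitter = 150    # randomize the limit so workers don't restart together
```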
Can you try using the `latest` tag, or specify the latest version, `v1.13.1`?
I apologise, but I'm a noob at this and couldn't find the version of the current container. But I can say with certainty that yesterday I put the latest version on my friend's computer and he ran into a similar situation after about 1 hour. As soon as he wakes up, I will post his error if necessary. I also wanted to say thank you very much for this program.
Interesting. It would help to double-check which version your friend is running. Perhaps it was cached somewhere and he is not yet using the version that contains the "fix". Do you set a different startup `CMD` by any chance? Because the "fix" is not in the code but in the startup `CMD` of the container.
See this line in the Dockerfile:

```dockerfile
CMD gunicorn main:app --max-requests 3000 --max-requests-jitter 150 --workers $WORKERS --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:$PORT
```
Please post as much information as possible. Although I really do think that for some reason this "fix" is missing, and that is why you are encountering issues after x amount of hours.
We didn't modify the CMD. I just checked with my friend, who has the latest version; the CMD string has the fix. Could it be because we are on Windows? We installed it like this:

```
docker pull steamcmd/api:latest
docker run -p 8000:8000 -d steamcmd/api:latest
```

What other information might be useful?
That looks good. What kind of requests do you (and your friend) normally do? Do you request info on just 1 app id, or several? One thing you could try is to set the `--max-requests` value to a much lower value. Seeing as this is set in the CMD, that would look something like this:

```
docker run -p 8000:8000 -d steamcmd/api:latest gunicorn main:app --max-requests 500 --max-requests-jitter 150 --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
```
Also, can you perhaps check the memory usage of the container/processes in the container? A tool like ctop can easily show you the usage. It might help to check the usage every 15 minutes or so and see if the CPU and/or memory usage slowly increases.
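If you prefer scripting the sampling, a small polling loop can log the numbers for you. A minimal sketch using `docker stats`; the container name below is a hypothetical placeholder, use whatever `docker ps` shows:

```python
# Sample container CPU/memory every 15 minutes via `docker stats`.
import datetime
import subprocess
import time

CONTAINER = "steamcmd-api"  # hypothetical name; use the id/name from `docker ps`

while True:
    usage = subprocess.check_output(
        ["docker", "stats", "--no-stream",
         "--format", "{{.CPUPerc}} {{.MemUsage}}", CONTAINER],
        text=True,
    ).strip()
    print(f"{datetime.datetime.now():%Y-%m-%d %H:%M} {usage}", flush=True)
    time.sleep(15 * 60)  # one sample every 15 minutes
```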
We're going through all the Steam app ids; we're trying to build a database. CPU is behaving normally, maxing out at 3%, but RAM grows by about 218 megabytes every 30 minutes. As a temporary solution on Windows, I restart the container on a schedule:
```powershell
While ($True) {
    Start-Sleep -Seconds 1800   # 30 minutes
    docker restart <container-id>
}
```
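Pacing the crawl itself can also soften the symptom between restarts. Purely as a hypothetical sketch, where the `/v1/info/<app-id>` endpoint shape and the port are assumptions to be adjusted to however you actually query the API:

```python
# Hypothetical paced crawl over app ids; the endpoint shape is an assumption.
import time
import requests

API = "http://localhost:8000/v1/info/{app_id}"  # assumed endpoint, adjust as needed

def fetch(app_id: int):
    resp = requests.get(API.format(app_id=app_id), timeout=30)
    return resp.json() if resp.status_code == 200 else None

for app_id in range(10, 100000, 10):  # Steam app ids are usually multiples of 10
    info = fetch(app_id)
    if info is not None:
        pass  # store the result in your database here
    time.sleep(0.5)  # pace requests instead of hammering the workers
```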
I would also like to ask: is it intended that in the new version logs are displayed only with some frequency, and not with every request like before?
I would like to thank you again for your hard work!
Your findings confirm my suspicion that you are still experiencing the memory leak. This should have been mitigated by the `--max-requests` setting, especially when set to a much lower value as I recommended. Perhaps Docker is not successfully pulling the latest image, or the CMD is overwritten somewhere. Could you run an inspect on the running container and post the output here:
```
docker ps
docker inspect <container-id>
```
Regarding the displaying of the logs: the current logging is really basic and is mostly used to show when app info is not retrieved from cache, and the number of tries it took to retrieve the app information. Sometimes the steam module uses a list of Steam servers that fail, and therefore a retry mechanism was built in.
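The project's actual retry code is not shown in this thread; as a sketch only, a generic version of that idea looks something like this:

```python
# Generic retry helper of the kind described above (a sketch, not the
# project's actual implementation).
import time

def with_retries(func, attempts: int = 3, delay: float = 1.0):
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return func()  # e.g. a request to a possibly failing Steam server
        except Exception as exc:
            last_exc = exc
            time.sleep(delay * attempt)  # back off a bit more on each retry
    raise last_exc
```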
If you mean that these log lines in general sometimes take a while to appear: you can use `PYTHONUNBUFFERED=1`. If you set this environment variable in your container, it should not buffer the output and will push it straight to stdout. You can simply add it to the startup command:

```
docker run -e "PYTHONUNBUFFERED=1" -p 8000:8000 -d steamcmd/api:latest
```
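To illustrate why this matters, here is a minimal standalone example (not from the project) of how block-buffered stdout delays log lines when it is attached to a pipe, and the in-code alternative:

```python
# When stdout is a pipe (as with `docker logs`), Python block-buffers it,
# so these lines may appear in bursts instead of once per second.
# PYTHONUNBUFFERED=1 (or `python -u`) disables that buffering process-wide.
import time

for i in range(5):
    print(f"handled request {i}")                 # buffered until the block fills
    # print(f"handled request {i}", flush=True)   # per-line alternative in code
    time.sleep(1)
```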
Thank you for the kind words :)
```
docker run -e "PYTHONUNBUFFERED=1" -p 8000:8000 -d steamcmd/api:latest gunicorn main:app --max-requests 500 --max-requests-jitter 150 --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
```
That solved the problem, thank you.