Upon further digging, there are a ton of defunct chromium processes that aren't getting reaped:
$ ps auxf
...
root 86382 2.1 2.3 124308 95400 ? Ss 11:25 0:23 \_ python /src/run.py --alerter email --email <redacted> --relay <redacted>
root 86471 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86472 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86493 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86502 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86513 0.0 0.0 0 0 ? ZN 11:25 0:00 \_ [chromium] <defunct>
root 86550 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86551 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
root 86583 0.0 0.0 0 0 ? Z 11:25 0:00 \_ [chromium] <defunct>
...(continues)
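For context on the Z/defunct entries: a zombie appears when a child process exits but its parent never calls wait() to collect its exit status, so the kernel keeps the dead entry in the process table. Here's a minimal standalone sketch that reproduces the effect (illustrative only, not code from this repo):

import subprocess
import time

# Spawn a child that exits almost immediately, but never wait() on it.
# Holding the Popen object open prevents Python from reaping it for us.
proc = subprocess.Popen(["sleep", "0"])
time.sleep(2)

# The child has exited, but its exit status was never collected, so it
# lingers as a zombie. In another terminal, `ps -eo stat,pid,comm | grep '^Z'`
# will now show a [sleep] <defunct> entry.
input("child is now <defunct>; press Enter to exit (init will reap it)")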
I haven't run the same commands as you to verify, but twice now I've been unable to start any new processes on my CentOS machine (including top), along with crashed containers, forcing a reboot. This never happened until I started running this container yesterday.
Just ran docker stats and my amazon container (checking for an RTX 3080) had over 5600 PIDs. Ouch.
I also noticed that a container built from config/ps5.yaml experiences this same issue, presumably because it includes an Amazon listing.
I don't have the time to dig into this at the moment, but I wanted to at least share what I found.
Hi all, I'm aware of the issue where the selenium driver (which is used for Amazon web scrapes) creates an infestation of chromium zombies. I'm working on a fix.
I kind of poked around, but I suck at Python. I did create a workaround that seems to be helping me: a cronjob that restarts my amazon container every hour with docker restart <container_id>.
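For reference, the crontab entry looks something like this (runs at the top of every hour; substitute your own container ID):

0 * * * * docker restart <container_id>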
Additionally, to help with the occasional human-verification prompts from Newegg that crash the container, I updated my Newegg containers with the --restart flag. You could probably do this for all of them, and they would restart on reboot as well.
docker update --restart unless-stopped <container_id>
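If you want to apply this to every running container at once, something like the following should work:

docker ps -q | xargs docker update --restart unless-stopped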
I fixed this issue by using the method outlined here; it's implemented in PR #76. My understanding is that it explicitly stops and reaps the child processes.
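For anyone curious, here's a rough sketch of the stop-and-reap idea. This is not the exact code from PR #76; it assumes psutil is available, and shutdown_driver is just a hypothetical helper name:

import psutil

def shutdown_driver(driver):
    # Hypothetical helper sketching the stop-and-reap approach;
    # the actual PR #76 implementation may differ.
    # Snapshot our child processes (chromedriver + chromium) first.
    children = psutil.Process().children(recursive=True)
    try:
        # Ask chromedriver to shut everything down cleanly.
        driver.quit()
    finally:
        # Explicitly stop any stragglers...
        for child in children:
            try:
                child.terminate()
            except psutil.NoSuchProcess:
                pass  # already gone
        # ...then reap them: wait_procs() collects exit statuses so the
        # dead children leave the process table instead of lingering
        # as <defunct>.
        psutil.wait_procs(children, timeout=5)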
Please pull the latest image using:
$ docker pull ericjmarti/inventory-hunter:latest
Had an issue last night where my docker host stopped allowing new processes. After some digging, it seems like the amazon_rtx_3070 container may be responsible for this. Compared to the others, it's forking a ton of processes. Has anyone else seen this? This screenshot was taken a few minutes after launch:

$ docker stats
[screenshot of docker stats output]

I'm using the default config/amazon_rtx_3070.yaml from the latest commit. Here's the docker logs output for the container. Not sure if the "missing title" entries could be related to this or not.