istresearch / scrapy-cluster

This Scrapy project uses Redis and Kafka to create a distributed on demand scraping cluster.
http://scrapy-cluster.readthedocs.io/
MIT License
1.18k stars 323 forks

Kafka-Monitor stats can potentially accumulate indefinitely if Redis restarts/fails. #186

Closed devspyrosv closed 5 years ago

devspyrosv commented 6 years ago

If the Redis DB restarts, whether on demand or after a failure, Kafka-Monitor is unable to clean up its stats, which reside in Redis. Those stats then keep growing in size, and Redis uses more and more RAM.

This happens because Kafka-Monitor collects stats using the RollingTimeWindow class (you can find it in scutils/stats_collector.py). RollingTimeWindow extends ThreadedCounter, which spawns a thread that runs the __mainloop function.

The expiration and cleanup of the stats take place inside __mainloop. The thread receives the Redis connection when it is spawned and has no control over that connection while it is running.

If Redis now restarts for some reason, the thread loses its connection and dies. If Kafka-Monitor is at a stage that doesn't need a Redis connection at that moment, and Redis manages to come back up in the meantime, Kafka-Monitor continues running and dumping stats to Redis without ever knowing that the cleanup thread is down.
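The failure mode can be reproduced in miniature. This is a minimal sketch, not the actual scrapy-cluster code: `FlakyRedis` is a hypothetical stand-in for a redis-py client, and `main_loop` mirrors the shape of ThreadedCounter's __mainloop. When the simulated Redis "restarts", the unhandled exception silently kills the cleanup thread while the main program keeps going.

```python
import threading
import time

class FlakyRedis:
    """Hypothetical stand-in for a redis-py client; raises while 'Redis' is down."""
    def __init__(self):
        self.up = True

    def expire_old_keys(self):
        if not self.up:
            raise ConnectionError("Connection refused")

def main_loop(conn):
    # Mirrors the shape of ThreadedCounter.__mainloop: expire stats forever.
    while True:
        conn.expire_old_keys()  # an unhandled ConnectionError kills the thread
        time.sleep(0.01)

conn = FlakyRedis()
t = threading.Thread(target=main_loop, args=(conn,), daemon=True)
t.start()
time.sleep(0.05)
conn.up = False      # simulate a Redis restart
t.join(timeout=1)
print(t.is_alive())  # False: the cleanup thread is dead for good,
conn.up = True       # even though "Redis" is back up
```

The main thread here stands in for the rest of Kafka-Monitor: it never notices the crash, because thread exceptions only go to stderr.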

madisonb commented 6 years ago

Fairly certain we just need to add a try/except around the block here, staying within the while loop but ensuring the thread doesn't die if a redis exception occurs.
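The suggested fix can be sketched like this (again a hedged, self-contained mock, not the actual patch; `FlakyRedis` is a hypothetical stand-in for a redis-py client). The try/except keeps the thread inside the while loop across a Redis outage, so cleanup resumes once Redis is back:

```python
import threading
import time

class FlakyRedis:
    """Hypothetical stand-in for a redis-py client; raises while 'Redis' is down."""
    def __init__(self):
        self.up = True
        self.expired = 0

    def expire_old_keys(self):
        if not self.up:
            raise ConnectionError("Connection refused")
        self.expired += 1

def main_loop(conn, stop):
    # Stay within the while loop; a Redis error must not kill the thread.
    while not stop.is_set():
        try:
            conn.expire_old_keys()
        except ConnectionError:
            pass  # Redis is down; retry on the next iteration
        time.sleep(0.01)

conn = FlakyRedis()
stop = threading.Event()
t = threading.Thread(target=main_loop, args=(conn, stop), daemon=True)
t.start()
conn.up = False; time.sleep(0.05)  # simulate a Redis outage
conn.up = True;  time.sleep(0.05)  # Redis comes back
print(t.is_alive())                # True: the cleanup thread survived
stop.set(); t.join()
```

In a real deployment you would likely also log the exception and back off before retrying, rather than silently passing.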

madisonb commented 5 years ago

Closed via #191