This seems like the best way to approach split/multi start_mitmreceiver.py setups: redis is shared and actively used across all mitmreceivers, so there is no need to query each mitmreceiver individually - they all report to redis and you just check those redis keys in your monitoring tool :)
Just provide the name of a key (a different one for each start_mitmreceiver.py, of course!) and it will start writing the current MITMReceiver queue size to redis every 30 seconds. Each redis entry is stored with a TTL of double the reporting interval, so you can even use this to detect whether MITMReceivers are down :)
Setting redis_report_queue_key in config.ini or passing --redis_report_queue_key on the command line enables this.
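The reporting side can be sketched roughly like this; `FakeRedis` is an in-memory stand-in so the snippet runs without a server, and the key name is just an example (use whatever you set via redis_report_queue_key):

```python
import time

REPORT_INTERVAL = 30  # seconds between reports (the default)

class FakeRedis:
    """Minimal in-memory stand-in for a redis client (SETEX/GET only)."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl, value):
        # store the value together with its absolute expiry time
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or time.time() > entry[1]:
            return None  # missing or expired, just like real redis
        return entry[0]

def report_queue_size(r, key, queue_size, interval=REPORT_INTERVAL):
    # TTL is double the reporting interval: if two consecutive reports
    # are missed, the key expires and monitoring can flag the receiver.
    r.setex(key, 2 * interval, queue_size)

r = FakeRedis()
report_queue_size(r, "mitmreceiver_1_queue", 42)  # example key name
print(r.get("mitmreceiver_1_queue"))  # prints 42 while the key is alive
```

With a real redis-py client the `setex` call is the same; the double TTL is what makes a vanished key meaningful to your monitoring.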
You can override the default 30-second window via --redis_report_queue_interval.
This is essentially an async version of monitoring for the dreaded "MITM data processing workers are falling behind! Queue length: XXX" log message.
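A monitoring-side check for that warning might look like the sketch below; the threshold and key names are hypothetical, and the raw values are whatever your monitoring tool reads back from the redis keys:

```python
QUEUE_WARN_THRESHOLD = 300  # hypothetical threshold, tune to your setup

def alert_for(key, raw_value, threshold=QUEUE_WARN_THRESHOLD):
    """Turn a redis report key's raw value into an alert message, or None.

    redis GET returns None once the key's TTL (double the reporting
    interval) has expired, i.e. the receiver missed two reports in a row.
    """
    if raw_value is None:
        return f"{key}: MITMReceiver appears to be down (report key expired)"
    size = int(raw_value)
    if size > threshold:
        return f"{key}: workers are falling behind! Queue length: {size}"
    return None

# example keys, one per start_mitmreceiver.py instance
print(alert_for("mitmreceiver_1_queue", b"17"))   # None: healthy
print(alert_for("mitmreceiver_2_queue", b"450"))  # falling-behind warning
print(alert_for("mitmreceiver_3_queue", None))    # down warning
```

Because the key expiring and the queue growing produce different messages, one check distinguishes "receiver down" from "receiver overloaded".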