You can get current queue sizes trivially:
% curl http://localhost:7420/stats
{
  "faktory" : {
    "queues" : {
      "queue0" : 64,
      "queue1" : 67,
      "queue2" : 54,
      "queue3" : 53,
      "queue4" : 72
    },
    "tasks" : {
      "Busy" : {
        "reaped" : 0,
        "size" : 0
      },
      "Dead" : {
        "cycles" : 1,
        "enqueued" : 0,
        "size" : 0,
        "wall_time_sec" : 9.1292e-05
      },
      "Retries" : {
        "cycles" : 13,
        "enqueued" : 0,
        "size" : 0,
        "wall_time_sec" : 0.001088125
      },
      "Scheduled" : {
        "cycles" : 13,
        "enqueued" : 0,
        "size" : 0,
        "wall_time_sec" : 0.005740415
      },
      "Workers" : {
        "reaped" : 0,
        "size" : 0
      }
    },
    "total_enqueued" : 310,
    "total_failures" : 310,
    "total_processed" : 30004,
    "total_queues" : 5
  },
  "now" : "2023-09-21T15:00:38.439896Z",
  "server" : {
    "command_count" : 0,
    "connections" : 0,
    "description" : "Faktory",
    "faktory_version" : "1.8.0",
    "uptime" : 64,
    "used_memory_mb" : 7
  },
  "server_utc_time" : "15:00:38 UTC"
}
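If you need those numbers programmatically rather than via curl, here is a minimal Python sketch that reads the per-queue sizes from the same endpoint. It assumes the dashboard port 7420 shown above and the third-party `requests` library; adjust both for your own deployment.

```python
import requests

# Faktory web UI / stats endpoint, as shown in the curl example above.
# The host and port are assumptions about your setup.
FAKTORY_STATS_URL = "http://localhost:7420/stats"

def queue_sizes():
    resp = requests.get(FAKTORY_STATS_URL, timeout=5)
    resp.raise_for_status()
    stats = resp.json()
    # "queues" maps queue name -> number of enqueued jobs, per the JSON above.
    return stats["faktory"]["queues"]

if __name__ == "__main__":
    for name, size in queue_sizes().items():
        print(f"{name}: {size}")
```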
Latency requires Faktory Enterprise and Statsd. https://github.com/contribsys/faktory/wiki/Ent-Metrics#latency
Thanks for the response!
Hello @mperham!
I'm trying to implement an efficient autoscaling process in my application, which uses Python workers and a Rails API to enqueue jobs. Like many previous commenters, I need a way to access queue latency/size to improve my autoscaler (it currently scales on worker CPU utilization, since the workers perform CPU-intensive tasks).
How can I get these metrics? Ideally I would use KEDA autoscaling, but I don't know how to get the desired info. Someone suggested using the Faktory API to gather these metrics, but it's not clear how to implement this in k8s. My application is already up and running really well using the Helm chart!
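One possible way to wire this up in k8s is a tiny adapter service that republishes a queue's current size as JSON for KEDA's `metrics-api` scaler to poll. This is only a sketch, not a confirmed recipe: the in-cluster address `faktory:7420`, the queue name, and the KEDA pairing are all assumptions about your setup.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-cluster address of the Faktory dashboard; adjust for your deployment.
FAKTORY_STATS_URL = "http://faktory:7420/stats"
QUEUE_NAME = "default"  # placeholder: the queue you want to scale on

class QueueSizeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull the same /stats payload shown earlier in this thread.
        with urllib.request.urlopen(FAKTORY_STATS_URL, timeout=5) as resp:
            stats = json.load(resp)
        size = stats["faktory"]["queues"].get(QUEUE_NAME, 0)
        body = json.dumps({"queue_size": size}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # KEDA's metrics-api scaler could poll this endpoint and read "queue_size";
    # that ScaledObject wiring is an assumption and not shown here.
    HTTPServer(("", 8080), QueueSizeHandler).serve_forever()
```

KEDA would then scale the worker deployment on the reported `queue_size`; the exact ScaledObject configuration depends on your chart and is left out here.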