contribsys / faktory

Language-agnostic persistent background job server
https://contribsys.com/faktory/

Autoscaling faktory in kubernetes #429

Closed JoaoPedroAssis closed 1 year ago

JoaoPedroAssis commented 1 year ago

Hey guys, I am trying to deploy an application that uses Faktory and Python workers in Kubernetes. Thinking about the problem and reading the relevant docs, a few questions came to mind.

Reading about this, the best option seems to be to use the request queue time to trigger autoscaling, but I'm not certain how to do this. Does anyone have a tip? Thanks in advance!

mperham commented 1 year ago

Typically you'd use queue backlog, latency, or the busyness of your local Worker process. If the worker is 100% utilized for 2 minutes, add another worker, up to N workers, etc. It's up to you to define your metrics.
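
A minimal sketch of that heuristic (mine, not from this thread): poll the queue backlog and add a worker replica whenever the backlog stays above a threshold for two minutes, capped at a maximum. The `/stats` JSON endpoint on the Faktory web UI port, its response shape, and the Deployment name `faktory-worker` are all assumptions made for the example.

```python
# Hedged sketch: scale the worker Deployment on sustained queue backlog.
# Assumptions (not confirmed in this thread): Faktory's web UI serves queue
# sizes as JSON at /stats, and the workers run in a Deployment named
# "faktory-worker".
import time

import requests
from kubernetes import client, config

FAKTORY_STATS_URL = "http://faktory:7420/stats"  # assumed stats endpoint
DEPLOYMENT = "faktory-worker"                    # hypothetical Deployment name
NAMESPACE = "default"
BACKLOG_THRESHOLD = 100   # jobs waiting before we consider scaling up
SUSTAINED_SECONDS = 120   # "utilized for 2 minutes" from the comment above
MAX_WORKERS = 10


def queue_backlog() -> int:
    """Total jobs waiting across all queues, per the assumed /stats shape."""
    stats = requests.get(FAKTORY_STATS_URL, timeout=5).json()
    return sum(stats["faktory"]["queues"].values())


def main() -> None:
    config.load_incluster_config()  # running as a Pod inside the cluster
    apps = client.AppsV1Api()
    saturated_since = None
    while True:
        if queue_backlog() >= BACKLOG_THRESHOLD:
            saturated_since = saturated_since or time.monotonic()
            if time.monotonic() - saturated_since >= SUSTAINED_SECONDS:
                current = apps.read_namespaced_deployment_scale(
                    DEPLOYMENT, NAMESPACE).spec.replicas
                if current < MAX_WORKERS:
                    apps.patch_namespaced_deployment_scale(
                        DEPLOYMENT, NAMESPACE,
                        {"spec": {"replicas": current + 1}})
                saturated_since = None  # restart the 2-minute window
        else:
            saturated_since = None
        time.sleep(15)


if __name__ == "__main__":
    main()
```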

sandyydk commented 1 year ago

I would like to do something similar, say, scale based on queue size. How can this be done in k8s?

jbielick commented 1 year ago

@sandyydk I think you have a couple options:

  1. (easier) use something like KEDA to query the queue latency or length via the Faktory HTTP API and scale based on that (see the exporter sketch after this list)
  2. (harder) ingest the queue latency or length metrics into a custom metrics server and use those custom metrics to scale via a HorizontalPodAutoscaler (docs on using custom metrics in HPA here)
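
Either route needs the queue sizes exposed somewhere the autoscaler can read them. A hedged sketch, assuming the same `/stats` JSON endpoint as above: a tiny exporter that republishes Faktory queue lengths as a Prometheus gauge, which KEDA's prometheus scaler (option 1) or a custom metrics adapter behind an HPA (option 2) could then query.

```python
# Hedged sketch: export Faktory queue lengths as a Prometheus gauge.
# The /stats endpoint and its JSON shape are assumptions, not something
# confirmed in this thread.
import time

import requests
from prometheus_client import Gauge, start_http_server

FAKTORY_STATS_URL = "http://faktory:7420/stats"  # assumed stats endpoint
QUEUE_SIZE = Gauge("faktory_queue_size",
                   "Jobs waiting in a Faktory queue", ["queue"])


def collect() -> None:
    stats = requests.get(FAKTORY_STATS_URL, timeout=5).json()
    for queue, size in stats["faktory"]["queues"].items():
        QUEUE_SIZE.labels(queue=queue).set(size)


if __name__ == "__main__":
    start_http_server(9100)  # scrape target for Prometheus / the adapter
    while True:
        collect()
        time.sleep(15)
```

A KEDA ScaledObject targeting the worker Deployment could then scale on a query such as sum(faktory_queue_size).
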
JoaoPedroAssis commented 1 year ago

@jbielick Thanks for the input, I absolutely hate using HPA custom metrics haha

Nevertheless, is the KEDA solution best used to scale the Faktory server itself, or the workers? Correct me if I'm wrong, but if the workers scale based on the Faktory queue latency/length, there is no need to scale the server.

@mperham Thanks for the reply as well

jbielick commented 1 year ago

> Nevertheless, is the KEDA solution best used to scale the Faktory server itself, or the workers?

The workers.