microcks / microcks-ansible-operator

Kubernetes Operator for easy setup and management of Microcks installs
https://microcks.io
Apache License 2.0

Support auto-scaling for the main microcks Deployment #109

Closed: robvalk closed this issue 1 month ago

robvalk commented 1 year ago

Reason/Context

Microcks's high performance makes it ideal for both functional and performance testing. However, the fixed scaling of the core Microcks Deployment makes it hard to scale the Microcks layer up and down for performance testing.

Description

Support auto-scaling of the Microcks Deployment via standard Kubernetes auto-scaling constructs.

Implementation ideas

Include a HorizontalPodAutoscaler in the Kubernetes resources managed from the Microcks CR. Or make the Microcks CR implement the scale sub-resource, allowing e.g. an HPA or KEDA to control auto-scaling. A sketch of each option follows.
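A minimal sketch of the first option, assuming the operator would render an HPA alongside the Deployment it already manages; the target Deployment name `microcks`, the replica bounds, and the CPU threshold are illustrative assumptions, not the operator's actual output:

```yaml
# Hypothetical HPA the operator could create next to the Microcks Deployment
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: microcks
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: microcks          # assumed name of the Deployment created by the operator
  minReplicas: 1
  maxReplicas: 4             # illustrative bounds, e.g. surfaced from the Microcks CR spec
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # illustrative threshold
```

For the second option, the Microcks CRD would declare the scale sub-resource, so that `kubectl scale`, an HPA, or KEDA can drive the replica count through the CR itself. The JSON paths below are hypothetical and would have to match the actual Microcks CR schema:

```yaml
# Hypothetical excerpt of the Microcks CRD enabling the scale sub-resource
subresources:
  scale:
    specReplicasPath: .spec.replicas      # assumed location of the replica count in the CR spec
    statusReplicasPath: .status.replicas  # assumed status field mirroring observed replicas
    labelSelectorPath: .status.selector   # assumed selector field; needed for HPA support
```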

yada commented 1 year ago

Hi @robvalk,

I'm sorry for the long delay in answering this issue.

This topic is essential for Microcks adopters running it at scale with advanced GitOps expertise and practices. So, we are working on a new operator that manages all Microcks configuration through custom resources, aimed at advanced community users.

Please see and join us on this new repo: https://github.com/microcks/microcks-operator

Looking forward to your ideas/comments/validation on this new operator (WIP).

Regards, Yacine

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity :sleeping:

It will be closed in 30 days if no further activity occurs. To remove the stale status, add a comment with a detailed explanation.

There can be many reasons why a specific issue has no activity. The most probable cause is lack of time, not lack of interest. Microcks is a Cloud Native Computing Foundation project that is not owned by a single for-profit company; it is a community-driven initiative governed under an open governance model.

Let us figure out together how to push this issue forward. Connect with us through one of the many communication channels we have established here.

Thank you for your patience :heart: