Operator can automatically heal an unhealthy operand
Auto-tuning
Operator is able to tune the operand for ideal performance
Workload optimization
Operator dynamically shifts workloads onto the best-suited nodes
Abnormality detection
Operator is able to learn the normal performance patterns of the operand
Operator is able to detect abnormalities outside of normal patterns
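Learning a performance baseline and flagging deviations from it can be sketched with a simple statistical model. This is a minimal illustration, not a prescribed implementation: it assumes the operator has collected a window of healthy metric samples (the names `learn`, `baseline`, and `abnormal` are invented for this sketch) and flags any new observation more than `k` standard deviations from the learned mean.

```go
package main

import (
	"fmt"
	"math"
)

// baseline holds a learned mean and standard deviation for one metric.
type baseline struct{ mean, std float64 }

// learn computes the baseline from a window of samples taken while
// the operand was known to be healthy.
func learn(samples []float64) baseline {
	var sum float64
	for _, v := range samples {
		sum += v
	}
	mean := sum / float64(len(samples))
	var sq float64
	for _, v := range samples {
		sq += (v - mean) * (v - mean)
	}
	return baseline{mean, math.Sqrt(sq / float64(len(samples)))}
}

// abnormal reports whether a new observation deviates from the
// baseline by more than k standard deviations.
func (b baseline) abnormal(v, k float64) bool {
	return math.Abs(v-b.mean) > k*b.std
}

func main() {
	// Hypothetical latency samples in milliseconds.
	b := learn([]float64{100, 102, 98, 101, 99})
	fmt.Println(b.abnormal(100, 3)) // within the baseline
	fmt.Println(b.abnormal(160, 3)) // well outside it: alert
}
```

A real operator would feed this from its metrics pipeline and update the baseline continuously; a static window is used here only to keep the sketch self-contained.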
Tasks
[ ] Can your operator read metrics such as requests per second, or other relevant metrics, and auto-scale horizontally or vertically, i.e., increase the number of pods or the resources used by pods?
[ ] Based on question 1, can it also scale down, i.e., decrease the number of pods or the total amount of resources used by pods?
[ ] Based on the deep insights built upon the level 4 capabilities, can your operator determine when an operand has become unhealthy and take action, such as redeploying, changing configurations, or restoring backups?
[ ] Given that the level 4 deep insights let the operator learn the performance baseline dynamically and identify the best configurations for peak performance, can it adjust the configurations accordingly?
[ ] Can it move workloads to better-suited nodes to improve performance?
[ ] Can it detect and alert when anything operates below the learned performance baseline and can't be corrected automatically?
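The scale-up and scale-down decisions in the first two tasks reduce to computing a desired replica count from an observed metric. The sketch below is a hedged illustration of that core calculation only (the function name `desiredReplicas` and the requests-per-second thresholds are assumptions, not part of any operator API); a real operator would wrap this in its reconcile loop and write the result to the operand's spec.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas returns the replica count needed to serve observedRPS
// at targetRPSPerPod requests per pod, clamped to [minR, maxR]. It covers
// both scale-up and scale-down: the result may be above or below current.
func desiredReplicas(observedRPS, targetRPSPerPod float64, current, minR, maxR int32) int32 {
	if targetRPSPerPod <= 0 {
		return current // invalid target: keep the current scale
	}
	want := int32(math.Ceil(observedRPS / targetRPSPerPod))
	if want < minR {
		want = minR
	}
	if want > maxR {
		want = maxR
	}
	return want
}

func main() {
	// 900 rps at a target of 100 rps per pod: scale up from 5 to 9.
	fmt.Println(desiredReplicas(900, 100, 5, 2, 10))
	// Load drops to 150 rps: scale down to the floor of 2 replicas.
	fmt.Println(desiredReplicas(150, 100, 5, 2, 10))
}
```

Clamping to a minimum and maximum mirrors the common autoscaler convention of bounding scale decisions so a metrics outage or spike cannot drive the operand to zero or to an unbounded size.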