In the current Druid Operator, when rollingDeploy is enabled, nodes are expected to restart one at a time in a pre-defined order. With multiple tiers within historicals, that translates to multiple StatefulSets of NodeType historical. The Operator, however, does not check whether each historical tier's StatefulSet has finished deploying, and ends up rolling out all historical tiers one after the other without waiting for the previous StatefulSet to be fully deployed.
This PR solves the issue by checking, when rollingDeploy is enabled, that each historical tier in the cluster is fully deployed before starting the next tier's deployment. In particular, when replicas of a datasource are distributed across multiple tiers, we do not want all tiers going down simultaneously, which could leave none of the segments available and hence cause downtime.
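As a rough illustration of the readiness gate described above, here is a minimal Go sketch. The struct and function names (`statefulSetStatus`, `tierReady`, `nextTierToDeploy`) are hypothetical simplifications, not the actual operator code; the fields mirror the standard StatefulSet status fields the operator would consult.

```go
package main

import "fmt"

// statefulSetStatus is a simplified stand-in for the fields of a
// StatefulSet's spec/status that a readiness check would look at.
type statefulSetStatus struct {
	Replicas        int32 // desired replica count (from the spec)
	ReadyReplicas   int32
	UpdatedReplicas int32
	CurrentRevision string
	UpdateRevision  string
}

// tierReady reports whether a tier's StatefulSet is fully rolled out:
// all replicas ready, all on the new revision.
func tierReady(s statefulSetStatus) bool {
	return s.ReadyReplicas == s.Replicas &&
		s.UpdatedReplicas == s.Replicas &&
		s.CurrentRevision == s.UpdateRevision
}

// nextTierToDeploy walks the historical tiers in their pre-defined
// order and returns the index of the first tier that is not yet fully
// deployed; later tiers are held back until this one reports ready.
// It returns len(tiers) when every tier has finished rolling out.
func nextTierToDeploy(tiers []statefulSetStatus) int {
	for i, t := range tiers {
		if !tierReady(t) {
			return i
		}
	}
	return len(tiers)
}

func main() {
	tiers := []statefulSetStatus{
		{Replicas: 3, ReadyReplicas: 3, UpdatedReplicas: 3, CurrentRevision: "r2", UpdateRevision: "r2"}, // hot tier: done
		{Replicas: 2, ReadyReplicas: 1, UpdatedReplicas: 1, CurrentRevision: "r1", UpdateRevision: "r2"}, // cold tier: mid-rollout
	}
	fmt.Println(nextTierToDeploy(tiers))
}
```

With the first (hot) tier fully rolled out and the second (cold) tier mid-rollout, the loop stops at the cold tier, so no later tier is touched until it becomes ready.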
This PR has:
[x] been tested on a real K8S cluster to ensure creation of a brand new Druid cluster works.
[x] been tested for backward compatibility on a real K8S cluster by applying the changes introduced here on an existing Druid cluster. If there are any backward incompatible changes then they have been noted in the PR description.
[x] added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
[ ] added documentation for new or modified features or behaviors.
Key changed/added files in this PR
- MyFoo
- OurBar
- TheirBaz