Hello,

It seems that whenever I make a Druid configuration change that requires the historicals to be exchanged, their StatefulSet is simply updated, so as soon as one pod is ready, the next pod gets replaced. As far as I know, the liveness/readiness probes do not consider whether the cluster itself is in a healthy enough state to replace the next historical pod. From the orchestration side the app is healthy; from the software side the system as a whole is not, due to missing segments that still need to be loaded.
When updating historicals, I'd like to suggest taking the following into account before moving on to the next pod (a rough sketch of how this could be checked is below):
- Amount of "Segments to load" - ideally 0; only then move on.
- Amount of underreplication - ideally 0, or low enough not to cause any "Datasource not 100% available" scenarios; only then move on.
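To make the idea concrete, here is a minimal, standalone sketch (not operator code) of how these two conditions could be checked against the Coordinator's documented load-status API. The Coordinator URL, timeout, threshold, and the assumed response shapes (datasource → count for `?simple`, tier → datasource → count for `?full`) are placeholders and assumptions on my side:

```go
// Standalone sketch, not operator code: it only calls the documented
// Coordinator endpoint /druid/coordinator/v1/loadstatus (?simple and ?full).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

var client = &http.Client{Timeout: 10 * time.Second}

// segmentsLeftToLoad sums the per-datasource counts from
// GET /druid/coordinator/v1/loadstatus?simple (segments not yet available,
// excluding replication).
func segmentsLeftToLoad(coordinatorURL string) (int64, error) {
	resp, err := client.Get(coordinatorURL + "/druid/coordinator/v1/loadstatus?simple")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("coordinator returned %s", resp.Status)
	}
	var perDatasource map[string]int64
	if err := json.NewDecoder(resp.Body).Decode(&perDatasource); err != nil {
		return 0, err
	}
	var total int64
	for _, n := range perDatasource {
		total += n
	}
	return total, nil
}

// underreplicatedSegments sums the per-tier, per-datasource counts from
// GET /druid/coordinator/v1/loadstatus?full (segments still missing replicas).
func underreplicatedSegments(coordinatorURL string) (int64, error) {
	resp, err := client.Get(coordinatorURL + "/druid/coordinator/v1/loadstatus?full")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("coordinator returned %s", resp.Status)
	}
	var perTier map[string]map[string]int64
	if err := json.NewDecoder(resp.Body).Decode(&perTier); err != nil {
		return 0, err
	}
	var total int64
	for _, datasources := range perTier {
		for _, n := range datasources {
			total += n
		}
	}
	return total, nil
}

// safeToReplaceNextHistorical implements the two conditions from the list
// above: no segments waiting to load, and underreplication at or below a
// tolerated maximum.
func safeToReplaceNextHistorical(coordinatorURL string, maxUnderreplicated int64) (bool, error) {
	toLoad, err := segmentsLeftToLoad(coordinatorURL)
	if err != nil {
		return false, err
	}
	under, err := underreplicatedSegments(coordinatorURL)
	if err != nil {
		return false, err
	}
	return toLoad == 0 && under <= maxUnderreplicated, nil
}

func main() {
	coordinator := "http://druid-coordinator:8081" // placeholder service address
	ok, err := safeToReplaceNextHistorical(coordinator, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "coordinator check failed:", err)
		os.Exit(1)
	}
	fmt.Println("safe to replace next historical:", ok)
}
```

In the operator this would presumably run between pod replacements during a rolling update rather than as a one-off binary, but the checks themselves would look roughly like this.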
I am aware that this might become a big changeset, as suddenly many more things have to be considered when replacing historicals. Maybe this could become a feature flag that has to be enabled explicitly, so the changed behavior can be tested before it becomes the default in a later release?
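If such a flag existed, the operator could gate each historical replacement on the check above. Building on the previous sketch, the gate itself could be as simple as the following (the poll interval, timeout, and the idea of an opt-in flag controlling whether this runs are invented for illustration and do not reflect any existing operator field or code path):

```go
// waitForStableCluster blocks until safeToReplaceNextHistorical (from the
// sketch above) reports a healthy cluster, or until the timeout expires.
// It would only run when the hypothetical opt-in flag is enabled.
func waitForStableCluster(coordinatorURL string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := safeToReplaceNextHistorical(coordinatorURL, 0)
		if err == nil && ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("cluster not stable after %s (last check error: %v)", timeout, err)
		}
		time.Sleep(interval)
	}
}
```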