@LewisKSaint @bczoma can you review?
Hi @tomkukral, thanks for the contribution. Are you using affinity and tolerations in an HA deployment?
Yes, we have dedicated pools (by taints) for solace. I can share deployment examples tomorrow.
That would be great, because one of the reasons affinity and tolerations are not supported in the Helm chart is that it is hard and error-prone to define rules for one StatefulSet governing three types of broker pods.
I will improve it a bit more and send an example.
Our Helm values.yaml looks like:
solace:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodepool
            operator: In
            values:
            - solace
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - pubsubplus-ha
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: nodepool
    operator: Equal
    value: solace
    effect: NoSchedule
and the nodes have:
apiVersion: v1
kind: Node
metadata:
  labels:
    agentpool: solace
    ...
  name: aks-solace-0980-vmss000000
spec:
  taints:
  - effect: NoSchedule
    key: nodepool
    value: solace
This configuration gives us the ability to select which nodes (pools) Solace should run on. Does this change make sense to you?
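For reference, the chart only needs to pass these values through to the StatefulSet pod spec. A rough sketch of the templating, assuming the usual toYaml pattern (the template path and structure below are illustrative, not the chart's actual code):

# templates/solaceStatefulSet.yaml (illustrative sketch only)
spec:
  template:
    spec:
      {{- with .Values.solace.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.solace.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}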
Thanks for the example rules @tomkukral. One question: are you making use of HA deployments, and if so, are you scheduling the HA deployment across availability zones so that each broker pod in the HA redundancy group gets scheduled to a different availability zone?
We are running the Kubernetes cluster in a single zone, so scheduling to different nodes within the node pool dedicated to Solace is enough for us.
Scheduling each replica (pod) to a different zone should be possible with:
solace:
  affinity:
    ...
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - pubsubplus-ha
        topologyKey: topology.kubernetes.io/zone
Can you consider merging this or provide feedback for improvements?
Hi Tomáš, yes we are considering this to be added in the next maintenance release, based on the code you suggested. Thank you for that! We don't have a date for it yet.
Thank you! I'm looking forward to having this in the main line :+1:
Do you have any ETA for merging this? I'd like to use it in another environment.
Hi Tomáš, not yet - the update to this repo is still waiting in the priority queue.
Hi guys. We'd like to use these features as soon as possible - we're trying to replace an existing solution with Solace. Unfortunately, the lack of control over affinity and tolerations (and, if possible, pod topology spread) @tomkukral (wink wink :) is a major blocker. We're also a little worried about the time you need to review this.
@marcinbojko feel free to add pod topology spread to this. Or I can add it next week.
However, I don't have any power to get this merged as I don't work for Solace. We ended up using my fork to deploy solace instead of upstream.
@tomkukral Absolutely understood, kudos for your work. My intention was to ask the Solace representatives, not you, to expedite this.
I need to set annotations for pods, so I'll be updating this PR to support that. I will also check whether pod topology spread can be supported.
I have added support for pod annotations and topologySpreadConstraints.
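For illustration, the new values might look roughly like this (key names follow my fork and the examples above; treat it as a sketch rather than the final schema):

solace:
  # illustrative annotations; any key/value pairs would be passed through to the pods
  podAnnotations:
    prometheus.io/scrape: "true"
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: pubsubplus-ha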
@marcinbojko can you verify it?
@tomkukral apologies, I'm no longer taking care of Solace. But the chart checks out.
Closing this stale PR and will continue the work in my fork.
This PR adds the possibility to configure pod affinities and tolerations. This approach can be used to run Solace on dedicated node pools.
Empty defaults are used so existing pods won't be affected.
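A minimal sketch of the intended defaults in values.yaml (illustrative of the intent, not the exact diff):

solace:
  # empty defaults keep the rendered manifests unchanged for existing deployments
  affinity: {}
  tolerations: []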