aaronchar opened this issue 7 years ago
Thank you for the feedback.
The coordinators should scale up/down properly. Can you post your resulting yaml file so I can take a look, hopefully replicate it on my end, and then figure out what's going on?
Also, what does your Kubernetes environment look like (self-hosted, AWS, Google)? If self-hosted, what networking add-on are you using?
Sure, I will try to find some time this afternoon to do that. One of the main things I changed, other than the volume, was adding this in:
```yaml
scheduler.alpha.kubernetes.io/affinity: >
  {
    "nodeAffinity": {
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "arangodb-coord",
                "operator": "In",
                "values": ["yes", "true"]
              }
            ]
          }
        ]
      }
    }
  }
```
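One thing worth double-checking with the `scheduler.alpha.kubernetes.io/affinity` annotation: its value must be valid JSON, and a stray trailing comma or mismatched brace can cause it to be ignored rather than rejected loudly. A minimal sketch of sanity-checking the annotation before applying it (the annotation string below is a cleaned-up version of the one above):

```python
import json

# The affinity annotation value must parse as JSON; json.loads raises
# ValueError on a trailing comma or mismatched brace, which makes this
# a cheap pre-flight check before applying the manifest.
annotation = """
{
  "nodeAffinity": {
    "requiredDuringSchedulingIgnoredDuringExecution": {
      "nodeSelectorTerms": [
        {
          "matchExpressions": [
            {"key": "arangodb-coord", "operator": "In", "values": ["yes", "true"]}
          ]
        }
      ]
    }
  }
}
"""

affinity = json.loads(annotation)  # raises ValueError if malformed

# Drill down to the match expression to confirm the structure is the
# nodeSelectorTerms shape that nodeAffinity expects.
terms = affinity["nodeAffinity"][
    "requiredDuringSchedulingIgnoredDuringExecution"]["nodeSelectorTerms"]
expr = terms[0]["matchExpressions"][0]
print(expr["key"], expr["operator"], expr["values"])
```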
First off, nice setup. I went ahead and gave this a shot, and it works perfectly with how you have it set up.
However, I noticed that once I started using persistent storage I needed to modify a few things around host names, etc. Agents and DB servers seem to work fine, but I am getting caught up on the coordinators. They seem to be duplicated every time I take the cluster down and up (4 -> 8, etc.). They are all reported as alive even though the IPs they point to no longer exist, so traffic is still being routed to them.
Just curious if I am missing something; I was under the impression that coordinators are supposed to be stateless.
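For what it's worth, one way to see which coordinators the cluster still believes in is ArangoDB's `/_admin/cluster/health` endpoint, and then remove the stale entries by hand. A minimal sketch of filtering that response (the `Health`/`Role`/`Status` field names follow the 3.x health API; the server IDs and endpoints below are invented sample data, not from this cluster):

```python
# Sketch: pick out coordinators that a cluster-health response no
# longer reports as healthy, so they can be cleaned up manually.
def stale_coordinators(health: dict) -> list:
    """Return server IDs of coordinators whose status is not GOOD."""
    return [
        server_id
        for server_id, info in health.get("Health", {}).items()
        if info.get("Role") == "Coordinator" and info.get("Status") != "GOOD"
    ]

# Invented sample data shaped like a /_admin/cluster/health response.
sample = {
    "Health": {
        "CRDN-0001": {"Role": "Coordinator", "Status": "GOOD",
                      "Endpoint": "tcp://10.0.0.11:8529"},
        "CRDN-0002": {"Role": "Coordinator", "Status": "FAILED",
                      "Endpoint": "tcp://10.0.0.99:8529"},  # pod gone, IP stale
        "PRMR-0001": {"Role": "DBServer", "Status": "GOOD",
                      "Endpoint": "tcp://10.0.0.21:8529"},
    }
}

print(stale_coordinators(sample))  # ['CRDN-0002']
```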