Closed by munnerz 6 years ago
In CassandraCluster:
In ElasticsearchCluster:
`elasticsearchcluster.spec.nodePools.config`
in which the user (usually) specifies an entire `elasticsearch.yml` to use. This creates a hard dependency between the `NODE_MASTER`, `NODE_INGEST` and `NODE_DATA` boolean env vars set by the Pilot, and the user's config in `elasticsearch.yml`. I've planned for a while to allow the user to omit fields in this file, and instead have the Pilot automatically inject config into it (i.e. have the `elasticsearchcluster.spec.nodePools.config["elasticsearch.yml"]` key append instead of replace).
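To make the trade-off concrete, here is a hypothetical manifest sketch (the API group/version and exact field names are assumed from the issue and may differ from the real schema):

```yaml
# Hypothetical sketch: today the user supplies the whole elasticsearch.yml,
# duplicating the role flags the Pilot already sets via NODE_MASTER etc.
# Under the proposal, this key would be merged into generated config
# instead of replacing it.
apiVersion: navigator.jetstack.io/v1alpha1
kind: ElasticsearchCluster
metadata:
  name: demo
spec:
  nodePools:
  - name: data
    replicas: 3
    config:
      elasticsearch.yml: |
        # user-supplied extras only; node roles injected by the Pilot
        path.data: /usr/share/elasticsearch/data
```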
I'm not sure where best the tradeoff lies between overly complex ElasticsearchCluster manifests and the right level of customisability.
/cc @cehoffman @wallrj
In https://github.com/jetstack/navigator/issues/194#issuecomment-356930201 @munnerz wrote:
should we expose CqlPort? Can we manage it ourselves for now until such time that it's required to change it?
Probably not. I've looked at GoCQL and its cluster discovery mechanism, and I think the CQL load-balancing service may not be necessary and may actually confuse the discovery process, since the load balancer IP address isn't one of the addresses advertised by the cluster nodes. See #232
In which case, we may not need the readiness probe either 🤔
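For illustration, the kind of addressing the driver's discovery wants is what a headless Service provides rather than a load balancer. A sketch (names and labels assumed, not taken from Navigator's actual manifests):

```yaml
# Sketch (names assumed): a headless Service returns each Cassandra pod's
# own IP via DNS, so the driver's discovery sees the same addresses the
# nodes advertise -- unlike a single load-balanced Service IP.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-nodes
spec:
  clusterIP: None        # headless: DNS resolves to individual pod IPs
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042
```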
per our conversation yesterday, do we want to expose multiple node pool support right now? We could switch to this model in future more easily than we can switch back.
I think the node pool may be the mechanism by which we support multiple racks and datacentres. See #227
And perhaps you'd add another node pool if you want to change the size or class of the disks attached to nodes. Add a new pool with desired disks. Remove the pool with original disks. See #233
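That disk-migration flow might look like this (a hypothetical sketch; the API group/version and persistence field names are assumed from the issue, not confirmed):

```yaml
# Hypothetical sketch: change disk size/class by adding a new pool and
# removing the old one once data has rebalanced (see #233).
apiVersion: navigator.jetstack.io/v1alpha1
kind: CassandraCluster
metadata:
  name: demo
spec:
  nodePools:
  - name: rack1-small-disks   # original pool; scale down once data moves
    replicas: 3
    persistence:
      size: 100Gi
  - name: rack1-big-disks     # new pool with the desired disks
    replicas: 3
    persistence:
      size: 500Gi
      storageClass: fast-ssd
```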
is ReadyReplicas used at the moment?
Not yet. But hopefully soon. See #140
I think we should remove `sysctls` and document an alternative.
Seems like it should be something configured when provisioning the Kubernetes node.
And then there should be a way of scheduling elasticsearch pods to only the nodes where those sysctls have been configured.
I'm pretty much in agreement at the moment. The only sysctl settings I think should be allowed through are those on the safe whitelist that the kubelet knows how to apply through annotations on the pod. https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/#setting-sysctls-for-a-pod
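A sketch of the split that implies (the node label is hypothetical, standing in for whatever the node provisioning tooling would set; the annotation key is the one used by the kubelet's sysctl support at the time):

```yaml
# Sketch: only whitelisted ("safe") sysctls can be requested per pod via
# the kubelet annotation; unsafe ones like vm.max_map_count must be set
# when provisioning the node, with scheduling constrained to those nodes.
apiVersion: v1
kind: Pod
metadata:
  name: es-data
  annotations:
    security.alpha.kubernetes.io/sysctls: net.ipv4.tcp_syncookies=1
spec:
  nodeSelector:
    # hypothetical label applied by node provisioning, not a real convention
    sysctls/vm-max-map-count: "262144"
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
```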
Before we close this, we should make sure we don't return 'null' as a field value anywhere, else clients could get confused.
I've seen it at various points in our API, and for now I think we'll need to run a manual check. Perhaps some kind of basic check that default values for all fields result in a resource returned with no `null`s set.
We want to cut a 'stable' v1alpha1 API that can be depended upon, and begin working in the v1alpha2 API.
Before we do this, in the brief remaining window where we can perform breaking API changes, we should make our 'last' API changes to v1alpha1.
I'd like to call a review on all of our resources:
(taken from: https://github.com/jetstack/navigator/blob/afed2beb6a1f9638fb2b2ffe9fdad9b73704717c/pkg/apis/navigator/v1alpha1/types.go)