pavolloffay opened 7 years ago
I am trying the C* deployment from the k8s examples: https://github.com/kubernetes/examples/blob/master/cassandra/README.md

I could not use this C* cluster because the jaegertracing/jaeger-cassandra-schema image uses a different version of CQL:
```
Connection error: ('Unable to connect to any servers', {'172.17.0.6': ProtocolError("cql_version '3.4.0' is not supported by remote (w/ native protocol). Supported versions: [u'3.4.2']",), '172.17.0.4': ProtocolError("cql_version '3.4.0' is not supported by remote (w/ native protocol). Supported versions: [u'3.4.2']",), '172.17.0.5': ProtocolError("cql_version '3.4.0' is not supported by remote (w/ native protocol). Supported versions: [u'3.4.2']",)})
```
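The error means the cqlsh client in the schema image requests CQL version 3.4.0 while the server only advertises 3.4.2. One hedged workaround sketch (not the project's official fix) is to force cqlsh to use the version the server supports; the address and port below are taken from the output in this issue and are assumptions for your cluster:

```shell
# Force cqlsh to negotiate the CQL version the server advertises (3.4.2 here)
# instead of its default. Address 172.17.0.4 is from the nodetool output above;
# adjust host/port to your own cluster.
cqlsh --cqlversion=3.4.2 172.17.0.4 9042
```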
```
kubectl delete po/cassandra-0
kubectl exec -it cassandra-0 -- nodetool status

Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID                               Rack
UN  172.17.0.5  83.18 KiB   32      76.9%             de54c5c1-17db-4a84-bdf4-afb459c83576  Rack1-K8Demo
UN  172.17.0.4  109.09 KiB  32      65.2%             79f692db-5b4b-4111-89fc-3df7e9408ce3  Rack1-K8Demo
UN  172.17.0.6  102.25 KiB  32      58.0%             e30000e3-4332-434c-81c6-408a9d0671a4  Rack1-K8Demo
```
```
kubectl scale sts cassandra --replicas=4
kubectl exec -it cassandra-0 -- nodetool status

Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID                               Rack
UN  172.17.0.5  83.18 KiB   32      53.9%             de54c5c1-17db-4a84-bdf4-afb459c83576  Rack1-K8Demo
UN  172.17.0.4  166.83 KiB  32      43.4%             79f692db-5b4b-4111-89fc-3df7e9408ce3  Rack1-K8Demo
UN  172.17.0.7  65.66 KiB   32      50.3%             67a30197-ed1a-4496-9884-c9ec0bb5347a  Rack1-K8Demo
UN  172.17.0.6  65.65 KiB   32      52.4%             e30000e3-4332-434c-81c6-408a9d0671a4  Rack1-K8Demo
```
```
kubectl patch sts cassandra -p '{"spec":{"replicas":3}}'
kubectl exec -it cassandra-0 -- nodetool status

Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID                               Rack
UN  172.17.0.5  101.09 KiB  32      53.9%             de54c5c1-17db-4a84-bdf4-afb459c83576  Rack1-K8Demo
UN  172.17.0.4  166.83 KiB  32      43.4%             79f692db-5b4b-4111-89fc-3df7e9408ce3  Rack1-K8Demo
DN  172.17.0.7  65.66 KiB   32      50.3%             67a30197-ed1a-4496-9884-c9ec0bb5347a  Rack1-K8Demo
UN  172.17.0.6  65.65 KiB   32      52.4%             e30000e3-4332-434c-81c6-408a9d0671a4  Rack1-K8Demo
```
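In the default `nodetool status` output above, the first field of each data row is the status/state flags and the second is the address, so the stuck node can be spotted mechanically. A small sketch (assuming that default output format):

```shell
# List the addresses of nodes reported Down (DN) by nodetool status.
# Assumes the default output layout shown above: field 1 = status flags,
# field 2 = address.
kubectl exec -it cassandra-0 -- nodetool status | awk '$1 == "DN" { print $2 }'
```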
Is this outcome different from what we have, or is it just confirmation that we are doing the same as the reference is doing?
By reference do you mean the cassandra deployment from the k8s examples? The goal was to get more familiar with C* deployment on K8s and to explore what works and what does not in our deployment (maybe for future improvements).

I strongly agree with you that, for now, we should just say that our deployment has limited functionality.
> By reference do you mean cassandra deployment from k8s examples?
Yes :) I'm just not sure which part of that comment is relevant, as I wouldn't know how to compare it with the "expected" output, or with the output from our template.
> UN 172.17.0.5 83.18 KiB 32 53.9% de54c5c1-17db-4a84-bdf4-afb459c83576 Rack1-K8Demo
If you delete a pod and it recovers as expected, it should be UN when running `nodetool status` on all C* nodes. In the scale-down case you can see that one node did not recover properly (it shouldn't be listed at all, but it is shown as DN).
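For scaling down, the usual Cassandra procedure (general C* practice, not something this template automates; treat it as a hedged sketch) is to decommission the highest-ordinal pod before reducing the replica count, so the ring drops the node cleanly instead of leaving a DN entry:

```shell
# Decommission the node that the StatefulSet will delete when shrinking
# from 4 to 3 replicas (the highest ordinal, cassandra-3), then scale down.
kubectl exec -it cassandra-3 -- nodetool decommission
kubectl scale sts cassandra --replicas=3
```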
Curious if you ever figured this out @pavolloffay, I'm experiencing the same and wondering how to scale down.
@hobbs The C* template provided in this repo is not production ready; use other templates or Helm charts to create a scalable C* deployment.
I get `Cannot achieve consistency level LOCAL_ONE` after I have manually deleted a C* pod. Sometimes it recovered, sometimes it returned this error. The C* logs show this:

Related issues: https://github.com/kubernetes/kubernetes/issues/24030#issuecomment-210197450 https://github.com/kubernetes/kubernetes/issues/34978#issue-183517949
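If a dead node is still counted as a replica owner (like the DN ghost in the scale-down output above), the ring can fail LOCAL_ONE reads even though live nodes are up. One hedged cleanup sketch is to remove the dead node by its Host ID from any live node; the ID below is the one from the DN row earlier in this thread and is only an example:

```shell
# Remove the dead node from the ring using the Host ID shown in the DN row
# of nodetool status. Run from any live node; the ID here is illustrative.
kubectl exec -it cassandra-0 -- nodetool removenode 67a30197-ed1a-4496-9884-c9ec0bb5347a
```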