tahsinrahman opened 5 years ago
If I scale the NATS Streaming cluster down to size 1, the pod pharmer-cluster-1
runs fine.
$ kubectl logs -f pharmer-cluster-1
[1] 2019/06/27 05:30:26.817748 [INF] STREAM: Starting nats-streaming-server[pharmer-cluster] version 0.12.2
[1] 2019/06/27 05:30:26.817803 [INF] STREAM: ServerID: pleessm07DDglxq2tYFk5b
[1] 2019/06/27 05:30:26.817808 [INF] STREAM: Go version: go1.11.6
[1] 2019/06/27 05:30:26.817811 [INF] STREAM: Git commit: [4489c46]
[1] 2019/06/27 05:30:26.835108 [INF] STREAM: Recovering the state...
[1] 2019/06/27 05:30:26.835336 [INF] STREAM: No recovered state
[1] 2019/06/27 05:30:27.086859 [INF] STREAM: Message store is FILE
[1] 2019/06/27 05:30:27.086882 [INF] STREAM: Store location: store
[1] 2019/06/27 05:30:27.086941 [INF] STREAM: ---------- Store Limits ----------
[1] 2019/06/27 05:30:27.086945 [INF] STREAM: Channels: 100 *
[1] 2019/06/27 05:30:27.086948 [INF] STREAM: --------- Channels Limits --------
[1] 2019/06/27 05:30:27.086951 [INF] STREAM: Subscriptions: 1000 *
[1] 2019/06/27 05:30:27.086954 [INF] STREAM: Messages : 1000000 *
[1] 2019/06/27 05:30:27.086956 [INF] STREAM: Bytes : 976.56 MB *
[1] 2019/06/27 05:30:27.086959 [INF] STREAM: Age : unlimited *
[1] 2019/06/27 05:30:27.086962 [INF] STREAM: Inactivity : unlimited *
[1] 2019/06/27 05:30:27.086964 [INF] STREAM: ----------------------------------
[1] 2019/06/27 05:31:04.613144 [INF] STREAM: Channel "create-cluster" has been created
[1] 2019/06/27 05:31:04.616178 [INF] STREAM: Channel "delete-cluster" has been created
[1] 2019/06/27 05:31:04.618912 [INF] STREAM: Channel "retry-cluster" has been created
but when I scale up to size 2 or 3, the first pod keeps running while the other pods fail
$ kubectl logs -f pharmer-cluster-2
[1] 2019/06/27 05:39:05.591052 [INF] STREAM: Starting nats-streaming-server[pharmer-cluster] version 0.12.2
[1] 2019/06/27 05:39:05.591103 [INF] STREAM: ServerID: EZj3gVhakNMkk7k81uEP9K
[1] 2019/06/27 05:39:05.591107 [INF] STREAM: Go version: go1.11.6
[1] 2019/06/27 05:39:05.591110 [INF] STREAM: Git commit: [4489c46]
[1] 2019/06/27 05:39:05.610442 [INF] STREAM: Recovering the state...
[1] 2019/06/27 05:39:05.610731 [INF] STREAM: No recovered state
[1] 2019/06/27 05:39:05.610840 [INF] STREAM: Cluster Node ID : "pharmer-cluster-2"
[1] 2019/06/27 05:39:05.610846 [INF] STREAM: Cluster Log Path: pharmer-cluster/"pharmer-cluster-2"
[1] 2019/06/27 05:39:10.720020 [INF] STREAM: Shutting down.
[1] 2019/06/27 05:39:10.720534 [FTL] STREAM: Failed to start: failed to join Raft group pharmer-cluster
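The FTL line above means the new node gave up trying to join the existing Raft group — typically because it could not reach the current peers before the join timeout. For reference, a clustered nats-streaming-server node is started with flags along these lines. This is a minimal sketch only; the peer names, paths, and NATS URL are assumptions inferred from the logs, not the operator's actual manifest:

```yaml
# Sketch of a clustered nats-streaming-server container spec.
# Values marked (assumed) are illustrative, not from the original thread.
containers:
  - name: nats-streaming
    image: nats-streaming:0.12.2
    args:
      - "-clustered"                      # run in clustered (Raft) mode
      - "-cluster_id=pharmer-cluster"     # Raft group name seen in the logs
      - "-cluster_node_id=$(POD_NAME)"    # stable per-pod node ID (assumed env var)
      - "-cluster_log_path=/data/raft"    # persistent Raft log location (assumed)
      # Every node must be able to resolve and reach its peers; broken
      # peer connectivity (e.g. headless-service DNS after an upgrade) is
      # a common cause of "failed to join Raft group".
      - "-cluster_peers=pharmer-cluster-1,pharmer-cluster-2,pharmer-cluster-3"
      - "-nats_server=nats://nats:4222"   # NATS URL (assumed)
```

If the peers in `-cluster_peers` cannot be reached from the new pod, the node shuts down with exactly the failure shown above.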
Having this exact same issue on EKS, except it happens without any upgrade. What Kubernetes version did you upgrade to? (I'm on 1.12 and experiencing this.)
We upgraded to 1.13.6-gke.13.
I've been having this issue on 1.11, 1.12, and 1.13 in EKS. I've been digging and am not really sure what the root cause is.
I'm having this issue on 1.14 with K3S as well.
Same with k3s v1.17.4+k3s1.
We're running the NATS Streaming operator in GKE, and it was running fine. But after we updated the Kubernetes version, NATS clients are no longer able to connect to the NATS server.
From the NATS client: