a-alinichenko opened this issue 3 weeks ago
How many JS API operations are you doing per second? You can get a sense of that from nats traffic under the system account.
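(A side note for anyone reading along: a different, rough way to gauge JS API operations per second is to sample the server's /jsz monitoring endpoint twice and diff its cumulative API counter. This is only a sketch, not what was suggested above; it assumes the monitoring port, 8222 by default, is reachable, e.g. via kubectl port-forward, and the URL is a placeholder.)

```go
// jsapirate.go: rough estimate of JetStream API operations per second by
// sampling /jsz twice and diffing the cumulative "api.total" counter.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// jszAPI mirrors only the fields we need from the /jsz response.
type jszAPI struct {
	API struct {
		Total  uint64 `json:"total"`
		Errors uint64 `json:"errors"`
	} `json:"api"`
}

func sample(url string) (jszAPI, error) {
	var out jszAPI
	resp, err := http.Get(url)
	if err != nil {
		return out, err
	}
	defer resp.Body.Close()
	err = json.NewDecoder(resp.Body).Decode(&out)
	return out, err
}

func main() {
	const url = "http://localhost:8222/jsz" // adjust host/port for your setup
	const window = 10 * time.Second

	before, err := sample(url)
	if err != nil {
		panic(err)
	}
	time.Sleep(window)
	after, err := sample(url)
	if err != nil {
		panic(err)
	}

	rate := float64(after.API.Total-before.API.Total) / window.Seconds()
	fmt.Printf("~%.0f JS API requests/sec (errors in window: %d)\n",
		rate, after.API.Errors-before.API.Errors)
}
```

Note that /jsz reflects the counters of the one server you query, so in a 7-pod cluster each pod would need to be sampled separately.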
@derekcollison I think this metric shows this
ok thanks, that would not cause those repeated messages about not being able to catch up to the meta leader. Something else is going on that is not obvious to us.
@derekcollison Should I provide any other details? NATS is installed in Kubernetes (GCP GKE). The applications that use it run in the same cluster, in the same network/region. We have many applications there and have problems only with NATS.
We also have the same NATS cluster configuration with the same applications in another region with a lower load, and with that lower load we don't see such problems with NATS.
Let's see if @wallyqs could spend some time on a zoom call with you.
Hi @a-alinichenko, ping me at wally@nats.io and we can try to take a look at the setup/scenario.
@a-alinichenko I wonder if this could be related to the readiness probe as well, so instead of the current default in the Helm charts you could change it to this:
readinessProbe:
  httpGet:
    path: /
@wallyqs Thank you for your answer! I can change it on our side and test it. But is it fine to have warnings like the ones I described in the logs? If I just change the healthcheck parameters, won't those warnings still cause problems?
@a-alinichenko in v2.11 we changed the readiness probe to be less sensitive and to avoid the errors you posted, but for v2.10 what I shared above works better to avoid the Kubernetes service detaching the pod when there is a lot of activity.
@wallyqs, thanks for the clarification!
Observed behavior
Every NATS cluster restart triggers this problem. When we restart the NATS cluster we get the following logs and the pod does not work:
While this is happening, the consumer does not read messages and they accumulate on disk.
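(As a side note, one way to quantify that backlog from the application side is to ask JetStream for the consumer state with the same nats.go client. A minimal sketch; the stream name "ORDERS" and consumer name "orders-worker" are placeholders, not the actual names in this setup.)

```go
// backlogcheck.go: print how many messages a consumer still has pending,
// which is what accumulates on disk while the pod is unhealthy.
// "ORDERS" and "orders-worker" are placeholder names for illustration.
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL) // adjust URL/credentials as needed
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	ci, err := js.ConsumerInfo("ORDERS", "orders-worker")
	if err != nil {
		log.Fatal(err)
	}

	// NumPending: stream messages the consumer has not yet received.
	// NumAckPending: delivered but not yet acknowledged.
	fmt.Printf("pending=%d ack_pending=%d redelivered=%d\n",
		ci.NumPending, ci.NumAckPending, ci.NumRedelivered)
}
```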
Expected behavior
Restarting a pod should not lead to operational problems.
Server and client version
nats-server version: 2.10.21. Go client library: github.com/nats-io/nats.go v1.34.1.
Host environment
Installed via the official Helm chart in Kubernetes. 7 pods in the cluster; 7 streams (one per pod) placed on different pods via tags (see the sketch below).
Allocated resources for each pod: CPU: 4 cores, memory: 15 GiB.
Current load: 6,000-7,000 messages per second. The Prometheus query to count this:
Stream info example:
Filter subject: 1
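(For illustration, a stream pinned to tagged servers is created roughly like this with nats.go. This is a minimal sketch: the stream name, subject, and tag are placeholders rather than the actual configuration of this cluster, and the tags themselves are assumed to be set on the servers, e.g. via server_tags in the server config.)

```go
// placement.go: create a stream that JetStream places on servers carrying a
// given tag. Stream name, subject, and tag below are placeholders.
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	_, err = js.AddStream(&nats.StreamConfig{
		Name:     "EVENTS_1",
		Subjects: []string{"events.1.>"},
		Storage:  nats.FileStorage,
		Replicas: 1,
		Placement: &nats.Placement{
			Tags: []string{"pod-1"}, // stream lands on servers tagged "pod-1"
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```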
Steps to reproduce
Just restart a pod under high load.