Closed: cbrewster closed this issue 2 months ago
One option to sync them all to the current leader would be to scale the KV's stream down to 1 replica and back up to 3.
nats stream update KV_deployments --replicas=1
Then scale back up.
nats stream update KV_deployments --replicas=3
If the issue is due to the use of direct gets, one quick and easy workaround would be to edit the stream's config to turn direct gets off.
In any case, if you know there's a likelihood of a single key being modified by two clients at the same time, and you therefore want to use CAS to control concurrent writes to the key, there's a general argument for not using direct gets: you want to make sure you always get the latest possible value for the key when doing your get before the update.
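If it helps, something along these lines with the Go client should flip that flag (a minimal sketch; the connection URL is an assumption, KV_deployments is the stream from this thread):

// Sketch: turn direct gets off on the KV's backing stream using nats.go.
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Adjust the URL/credentials for your deployment.
	nc, err := nats.Connect("nats://nats-0.nats:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the current stream config, clear AllowDirect, and push the update.
	si, err := js.StreamInfo("KV_deployments")
	if err != nil {
		log.Fatal(err)
	}
	cfg := si.Config
	cfg.AllowDirect = false
	if _, err := js.UpdateStream(&cfg); err != nil {
		log.Fatal(err)
	}
	log.Println("direct gets disabled on KV_deployments")
}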
Initially I thought this was a replication issue, but I was able to see the same node flip back and forth between the two different responses (one current and one stale).
We also have a service which subscribes to the KV using a Watcher and keeps its own version of the table in memory. We noticed that after the upgrade, these could also have stale records (hours old). We do have some retries around CAS operations, but we can try disabling direct get. It just seems odd to me that the stale records are so old; I wouldn’t expect direct get to be causing the issue I’m seeing, but maybe I misunderstand how it works?
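For context, the watcher side of that service is roughly this shape (a simplified sketch; the bucket name and in-memory structure are illustrative, not our actual code):

// Sketch: mirror a KV bucket into an in-memory map via a watcher (nats.go).
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://nats-0.nats:4222") // illustrative URL
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	kv, err := js.KeyValue("deployments") // illustrative bucket name
	if err != nil {
		log.Fatal(err)
	}

	w, err := kv.WatchAll()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	table := map[string][]byte{}
	for entry := range w.Updates() {
		if entry == nil {
			continue // nil marks the end of the initial replay
		}
		switch entry.Operation() {
		case nats.KeyValueDelete, nats.KeyValuePurge:
			delete(table, entry.Key())
		default:
			table[entry.Key()] = entry.Value()
		}
	}
}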
I'm not sure if it is related, but we are seeing error messages when updating values (nats-server 2.10.2):
➜ ~ nats kv add demo
Information for Key-Value Store Bucket demo created 2023-10-26T10:54:39+02:00
Configuration:
Bucket Name: demo
History Kept: 1
Values Stored: 0
Backing Store Kind: JetStream
Bucket Size: 0 B
Maximum Bucket Size: unlimited
Maximum Value Size: unlimited
Maximum Age: unlimited
JetStream Stream: KV_demo
Storage: File
Cluster Information:
Name: C1
Leader: n1-c1
➜ ~ nats kv create demo k1 v1
v1
➜ ~ nats kv update demo k1 v2
nats: error: nats: wrong last sequence: 1
The last command needs to be nats kv update demo k1 v2 1, where the 1 is the revision of the previous value.
[rip@p1-lon]% nats kv create demo k1 v1
v1
[rip@p1-lon]% nats kv update demo k1 v2
nats: error: nats: wrong last sequence: 1
[rip@p1-lon]% nats kv update demo k1 v2 1
v2
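The same flow from a client (for example nats.go) would look roughly like this (a sketch; the bucket and key follow the demo above):

// Sketch: CAS update of a KV key, passing the revision from the prior read (nats.go).
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	kv, err := js.KeyValue("demo")
	if err != nil {
		log.Fatal(err)
	}

	// Read the current entry, then update against the revision we read.
	entry, err := kv.Get("k1")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := kv.Update("k1", []byte("v2"), entry.Revision()); err != nil {
		// A "wrong last sequence" error means the key changed between our Get
		// and Update; re-read and retry if that is what the application wants.
		log.Fatal(err)
	}
}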
I'll improve the error there.
For some more info, we've discovered the 3 nodes have different data for these streams and are not self-healing:
(via curl "nats-{1,2,3}.nats:8222/jsz?streams=true")
nats-0
{
  "name": "KV_deployments",
  "created": "2023-03-30T14:38:22.515826553Z",
  "cluster": {
    "name": "deployments-us-central1",
    "leader": "nats-2",
    "replicas": [
      {
        "name": "nats-1",
        "current": false,
        "active": 0,
        "lag": 803083,
        "peer": "yrzKKRBu"
      },
      {
        "name": "nats-2",
        "current": true,
        "active": 91107098,
        "lag": 803083,
        "peer": "cnrtt3eg"
      }
    ]
  },
  "state": {
    "messages": 10108,
    "bytes": 2549966,
    "first_seq": 367,
    "first_ts": "2023-03-30T15:06:59.251496804Z",
    "last_seq": 836453,
    "last_ts": "2023-10-26T09:08:23.456285522Z",
    "num_subjects": 10108,
    "num_deleted": 825979,
    "consumer_count": 1
  }
},
nats-1
{
  "name": "KV_deployments",
  "created": "2023-03-30T14:38:22.515826553Z",
  "cluster": {
    "name": "deployments-us-central1",
    "leader": "nats-2",
    "replicas": [
      {
        "name": "nats-0",
        "current": false,
        "active": 0,
        "lag": 803083,
        "peer": "S1Nunr6R"
      },
      {
        "name": "nats-2",
        "current": true,
        "active": 719890186,
        "lag": 803083,
        "peer": "cnrtt3eg"
      }
    ]
  },
  "state": {
    "messages": 10100,
    "bytes": 2541574,
    "first_seq": 367,
    "first_ts": "2023-03-30T15:06:59.251496804Z",
    "last_seq": 836361,
    "last_ts": "2023-10-26T09:08:23.324863022Z",
    "num_subjects": 10100,
    "num_deleted": 825895,
    "consumer_count": 1
  }
},
nats-2
{
  "name": "KV_deployments",
  "created": "2023-03-30T14:38:22.515826553Z",
  "cluster": {
    "name": "deployments-us-central1",
    "leader": "nats-2",
    "replicas": [
      {
        "name": "nats-0",
        "current": true,
        "active": 287742819,
        "peer": "S1Nunr6R"
      },
      {
        "name": "nats-1",
        "current": true,
        "active": 287704603,
        "peer": "yrzKKRBu"
      }
    ]
  },
  "state": {
    "messages": 10107,
    "bytes": 2549849,
    "first_seq": 367,
    "first_ts": "2023-03-30T15:06:59.251496804Z",
    "last_seq": 836453,
    "last_ts": "2023-10-26T09:08:23.456285522Z",
    "num_subjects": 10107,
    "num_deleted": 825980,
    "consumer_count": 2
  }
},
From our metrics we do observe that our replicas got out of sync on Monday. I did as @derekcollison suggested and scaled down to R1 and then back to R3, and it was able to re-synchronize. It is a bit concerning, though, that we can enter this state and not recover without manual intervention.
nats_stream_total_messages{stream_name="KV_deployments"}
Around this time there were some node restarts and some logs about resetting of WAL state.
Agree on drifting state, was trying to get you guys unblocked.
Did the scale down and up help?
Yup, all the replicas are tracking the primary as expected now
ok good. The other option is to snapshot and restore, but that involves some minor downtime that I did not want you to incur.
We cannot guarantee the drift will not happen again, but with 2.10.3 and 2.10.4 coming today we feel we are in a good spot and have fixed quite a few issues. My sense is the issue was there before the upgrade and persisted, hence my recommendation.
We were fully on 2.10.3 at the time the desync happened, shown in the graph above. Are there known issues that will be fixed by 2.10.4?
We have observed a different problem on 2.10.11, but the solution was exactly the same:
nats stream update KV_x --replicas=1
nats stream update KV_x --replicas=3
The issue we observed is that a 3-node deployment would report inconsistent KV counts - it turned out one of the 3 pods was not syncing:
nats -s "infra-nats.xxx:4222" kv ls active-controllers --creds tommy.creds | wc -l
246
nats -s "infra-nats.xxx:4222" kv ls active-controllers --creds tommy.creds | wc -l
246
nats -s "infra-nats.xxx:4222" kv ls active-controllers --creds tommy.creds | wc -l
1
So the same call made multiple times would return different results (or no data) depending on which of the 3 pods you hit. Scaling down the stream replicas to 1, and then back up to 3, forces a resync.
Sorry for piggybacking; I didn't want to create a new issue just yet, because the solution given above worked.
Fixed via #5821 on v2.10.19
Thanks much! Do you know when these releases typically make their way to the official helm repo?
@tommyjcarpenter the helm chart with the updated version is now available: https://github.com/nats-io/k8s/releases/tag/nats-1.2.3
Observed behavior
We have an R3 KV and we recently upgraded to NATS 2.10.3 from 2.9. We often use CAS operations to make updates to keys in the bucket: we'll read a key, make modifications, then update the key with the revision that was originally read. This had been working great until we upgraded to 2.10 and started to see errors like nats: wrong last sequence: 835882. We also started to notice other inconsistent behavior on systems that read from the KV. When investigating further we were able to see inconsistent reads from the same NATS server for the same KV key. Here's an example of what I was observing:
While this view doesn't show the servers, we have another command which provides a UI on top of NATS for our use-case and it was able to demonstrate this inconsistency within the same server.
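For anyone trying to observe the same thing, a loop along these lines (an illustrative sketch, not the tooling mentioned above) repeatedly reads one key from a single server and prints the revision; a revision that flips between an old and a new value shows the same symptom:

// Sketch: poll one key repeatedly and print the revision each read returns.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a single server so every read is served by that server.
	nc, err := nats.Connect("nats://nats-0.nats:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	kv, err := js.KeyValue("deployments") // bucket/key names are illustrative
	if err != nil {
		log.Fatal(err)
	}

	for i := 0; i < 20; i++ {
		entry, err := kv.Get("some-key")
		if err != nil {
			log.Println("get failed:", err)
		} else {
			fmt.Printf("revision=%d value=%s\n", entry.Revision(), entry.Value())
		}
		time.Sleep(500 * time.Millisecond)
	}
}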
This only started after we upgraded to 2.10.x earlier this week so we think this is likely a regression.
We tried to roll back to 2.9.23, but it seems that there are some data format inconsistencies that prevented us from doing so:
This has been causing a lot of weird errors for us since the data can be stale for many hours. We've seen a NATS server restart help the problem but it doesn't prevent the issue from happening again.
KV Config:
Expected behavior
We'd expect the same NATS server to respond with consistent values after being written and not show stale values minutes/hours after they were updated.
Server and client version
Server: 2.10.3
Host environment
We're running NATS on GKE using the 0.1.x helm chart
Steps to reproduce
Unfortunately we do not have exact steps to reproduce at the moment, but we will attempt to find a reproduction soon. Help before then would be appreciated. Either to fix forward on 2.10.x or to be able to roll back to 2.9.x until the fix is in place.