-
### Observed behavior
```go
js.Publish("test", []byte("hi"), nats.RetryWait(5*time.Second), nats.RetryAttempts(10))
```
It gives up after the first attempt, after waiting 5 seconds.
### Expected behavio…
-
### Observed behavior
## Benchmark the nats-server with the command:
```
nats bench --js --pub 5 --size 1024 --msgs 1000000 --dedup --stream mqtt_publish-0 mqtt_publish --multisubject
```
##…
-
### What version were you using?
nats helm chart 1.1.10
### What environment was the server running in?
OpenShift, amd64
### Is this defect reproducible?
Yes.
In an OpenShift cluster with non ro…
-
### Observed behavior
Hello.
We have detected high disk write load (up to 100 MB/s) on some servers of the NATS cluster.
Restarting the NATS server helps reduce the disk load.
We had no id…
-
### Observed behavior
To use just two instances as examples:
1. On the `NatsJSContext` class, the methods `CreateStreamAsync` and `UpdateStreamAsync` return different types:
```csharp
public a…
-
### Proposed change
When the server (with debug logging enabled) is restarted with low disk space, we get these messages:
```
nats-0 nats [1] 2023/12/04 12:57:55.395683 [DBG] RAFT [S1Nunr6R - S-R3F-njKSh…
-
### Observed behavior
I saw that my client was using 67 GB of memory. First of all, I checked pprof and got this result:
Also, in the client logs I sometimes see this: `context deadline exceeded`
### …
-
I just read about how data changes in the DB are not spread out.
NATS JetStream would do this for you in a fault-tolerant way.
https://github.com/maxpert/marmot/ does this for SQLite using NATS Jetst…
-
### Observed behavior
In an R3 cluster, when creating a work-queue-policy stream with explicit acks and a push consumer, it's possible to lose messages. The way to replicate it is to create and de…
-
### Observed behavior
We are experiencing strange NATS JetStream behavior: it periodically produces a huge amount of disk I/O (reads and writes) even without any clients connected (0 producers, 0 consumers …