marload opened this issue 1 year ago
It's always good to have some retention policy in NATS and have limits set for JetStream.
You can set retention limits for a stream (maximum age, maximum number of messages, maximum total bytes), which is always a good idea.
If you set more than one limit, the first one reached kicks in.
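For example, with the `nats` CLI (the stream name `FOO` and subject here are placeholders; the CLI will prompt interactively for any options not supplied as flags):

```shell
# Create a stream with all three limits set; the first limit reached wins.
nats stream add FOO \
  --subjects "foo.>" \
  --storage file \
  --retention limits \
  --max-age 2m \
  --max-msgs 100000 \
  --max-bytes 1GB \
  --replicas 1
```

An existing stream's limits can be changed later with `nats stream edit FOO --max-age=2m` and so on.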
You can also set a work-queue stream (messages are deleted after they're consumed) or an interest-based stream.
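The retention policy is chosen when the stream is created, e.g. (stream names and subjects here are hypothetical):

```shell
# Work-queue stream: a message is removed once a consumer acknowledges it.
nats stream add JOBS --subjects "jobs.>" --retention work

# Interest-based stream: messages are kept only while consumers with
# matching interest exist and have not yet acknowledged them.
nats stream add EVENTS --subjects "events.>" --retention interest
```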
You can also purge (remove messages, but leave the stream itself intact) or delete streams.
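With the `nats` CLI that looks like (stream name is a placeholder):

```shell
# Remove all messages but keep the stream and its consumers:
nats stream purge FOO

# Delete the stream entirely:
nats stream rm FOO
```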
More info in the docs: https://docs.nats.io/using-nats/developer/develop_jetstream/model_deep_dive
@Jarema
Thank you for your answer. I would like to apply Message Retention to NATS Stream, but I can't find any documentation on how to actually set it up. Do you know where I can find that documentation?
I'm running into a similar issue in a test environment, with "max age of messages" set to 2 minutes. We have a service that publishes several MB of data every minute to NATS, but NATS seems to be ignoring the setting.
The NATS cluster (3 nodes) is running version 2.9.20 and is deployed to k8s with the NATS Helm chart v0.19.17. JetStream is set up with a single replica and file storage.
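For reference, the relevant values in that generation of the `nats/nats` Helm chart look roughly like this (exact keys may differ between chart versions; the sizes below match my setup):

```yaml
nats:
  jetstream:
    enabled: true
    memStorage:
      enabled: true
      size: 2Gi
    fileStorage:
      enabled: true
      size: 10Gi   # sets both the PVC size and the server's file-store limit
```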
Nothing stands out in the NATS logs:
2023-09-03 21:22:17.873 nats_foo-nats-0 [7] 2023/09/03 21:22:17.873257 [INF] JetStream cluster new consumer leader for '$G > FOO > 7xx4brUh'
2023-09-05 13:03:02.335 nats_foo-nats-0 [7] 2023/09/05 13:03:02.335026 [ERR] JetStream failed to store a msg on stream '$G > FOO': write /data/jetstream/$G/streams/FOO/msgs/53624.blk: no space left on device
What other info can I provide to help debug this issue? Thanks
It looks like the resources configured for JetStream didn't match reality, so you ran out of disk space, which causes the server to disable JetStream.
Double-check all of the maximums for the store and memory resources for JetStream and compare them against the disk you are actually using.
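One way to compare configured limits against actual usage is the `nats` CLI (depending on your setup this may require account or system credentials; `FOO` is a placeholder stream name):

```shell
# Account-level JetStream usage vs. configured limits:
nats account info

# Per-stream message counts and byte usage across the cluster:
nats stream report

# Detailed configuration and state for one stream, including its limits:
nats stream info FOO
```

If `nats stream info` shows no `max-age`/`max-bytes` on the stream, the 2-minute setting likely never made it into the stream configuration, which would explain the disk filling up.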
I am using NATS with JetStream, deployed to Kubernetes via Helm. JetStream's fileStorage is 10GB and memStorage is 2GB. This is where the low-disk issue comes in.
How can I fix this? Can I free up the disk in NATS, or configure retention for the data?
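As a rough sanity check on the numbers above, here is a sketch of the fill-rate arithmetic; the 5 MiB/min publish rate is an assumed figure standing in for "several MB of data every minute":

```python
# Rough sanity check: how long until a 10 GiB file store fills up if the
# max-age retention limit is NOT being applied.
STORE_BYTES = 10 * 1024**3          # 10 GiB fileStorage
RATE_BYTES_PER_MIN = 5 * 1024**2    # assumed ~5 MiB published per minute

minutes_to_fill = STORE_BYTES / RATE_BYTES_PER_MIN
print(f"~{minutes_to_fill / 60:.0f} hours until the store is full")
# → ~34 hours

# With a working 2-minute max-age, steady-state usage would only be about:
steady_state = 2 * RATE_BYTES_PER_MIN
print(f"steady state with 2m max-age: ~{steady_state / 1024**2:.0f} MiB")
# → ~10 MiB
```

So a store that fills over a day or two, rather than holding a few MiB, is consistent with the limit never having been applied to the stream.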