KhurramShahzadODM opened 1 year ago
NATS will send and receive messages based on the resources it is given.
For your environment I might recommend a virtual interface that limits the bandwidth to what is desired.
You could also use JetStream to throttle inbound to some degree.
Well, we are using JetStream with disk persistence for our use case. I have observed that when we deploy the NATS server as a sidecar, its network usage is around 2 Gbps, whereas it goes up to 7 Gbps in case of the NATS server deployed on the same machine. Is it because of loopback?
K8s adds a lot to the network stack and can cause latency and throughput issues in some circumstances.
How did you deploy NATS into K8S?
Actually, we haven't deployed it on K8s. There are two approaches regarding NATS server deployment.
We have the same number of messages, the same stream configuration, and the same message size in both cases, but network contention is 3x higher in case 2 compared to case 1. What would be the cause — is it loopback? Secondly, is there any parameter to limit the network traffic generated by the NATS client?
Loopback will be faster, of course. What is the network link between the two machines when they are deployed on different machines?
Have you tested the I/O bandwidth between the machines with independent tooling?
Is this being deployed to a cloud provider? If so which one and what instance types are being used?
We haven't deployed to a cloud. Yes, we will consider testing with I/O tools, but the point is that we want to limit the rate of bits/sec or the number of messages published to the NATS server (inbound). Is there any implementation of this? We have seen the rate limit, but it's consumer-based.
We do not have one, and I would suggest looking into virtual network interfaces, or some other external mechanism, to achieve this.
We have looked into adapting the rate limiter that exists on JetStream consumers to NATS core inbound client connections, but this work has not been prioritized.
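Until something like that lands server-side, inbound throttling has to live in the application. A minimal client-side token-bucket sketch is below — plain Python with no NATS dependency, where the commented-out `js.publish(...)` line is a hypothetical stand-in for whatever publish call your client library provides:

```python
import time


class TokenBucket:
    """Client-side limiter: allows ~rate_bps bits/sec with bursts up to burst_bits."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # refill rate, bits per second
        self.capacity = burst_bits    # maximum burst size, in bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = time.monotonic()

    def acquire(self, bits: float) -> None:
        """Block until `bits` tokens are available, then consume them."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= bits:
                self.tokens -= bits
                return
            # Sleep roughly long enough for the deficit to refill.
            time.sleep((bits - self.tokens) / self.rate)


# Usage: cap publishing at ~10 Mbit/s of payload.
bucket = TokenBucket(rate_bps=10_000_000, burst_bits=100_000)
payload = b"x" * 100                     # 100-byte message, as in the benchmark
bucket.acquire(len(payload) * 8)         # charge the bucket before each publish
# js.publish("orders.new", payload)      # hypothetical: your JetStream publish call
```

Note this only caps payload bits; protocol framing and acks still add overhead on the wire, so the actual link usage will sit somewhat above the configured rate.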
Hi, we are fond of NATS JetStream and the way it processes 100 to 1,000 byte messages compared to its competitors. Our target is to capitalise on its potential for an OLTP use case (as IPC) under financial transaction processing requirements, and likewise to consider it for an enterprise logging architecture.

What we noticed during an in-house benchmark using custom testing tools is that it translates a high-traffic workload into network I/O aggressively. Our tool pumped 1 million messages of 100 bytes each in around 59 seconds, on a Windows Server 2019 machine with 6 virtual cores, 16 GB of RAM, and a 7200 RPM SATA disk. As a result, up to 7 Gbps of the 10 Gbps Ethernet bandwidth is consumed by NATS JetStream network I/O. Our NATS JetStream version is ??

We found the following two configuration options that might curb/control NATS JetStream traffic on Ethernet:

- `max_outstanding_catchup`: This one is not working, if we have correctly assumed it applies to our problem (https://nats.io/blog/nats-server-29-release/).
- `GOMEMLIMIT`: This variable is primarily used to curb memory in a containerised environment. We are not sure whether it will also limit network I/O, especially in our case, which is outside a container (NATS Server on Windows).
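As a back-of-the-envelope sanity check on those numbers (a calculation, not a measurement): 1 million 100-byte payloads in 59 seconds is only about 13.6 Mbit/s of raw payload, so the reported 7 Gbps on the wire would be hundreds of times the payload rate — which suggests the observed traffic is dominated by something other than the payloads themselves (protocol framing, acks, replication or persistence traffic), or that the measurement captures more than just this NATS flow:

```python
# Payload-only throughput implied by the quoted benchmark figures.
messages = 1_000_000
payload_bytes = 100
seconds = 59

payload_bits_per_sec = messages * payload_bytes * 8 / seconds
print(f"payload-only rate: {payload_bits_per_sec / 1e6:.1f} Mbit/s")  # ~13.6 Mbit/s

observed_bps = 7e9  # reported ~7 Gbps of network I/O
print(f"observed / payload ratio: {observed_bps / payload_bits_per_sec:.0f}x")
```

If the ratio really is that large, comparing the per-message wire overhead (subject, headers, acks) against the 100-byte payload would be the next thing to measure.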