marvin-hansen closed this issue 9 months ago
Ran benchmark:
cargo run --release -- --num-of-streams 1 --message-size 32 --num-of-messages 10000000
Number of Messages: 10,000,000
Number of Streams: 1
Message Size (Bytes): 32
Batching Enabled: false
Compression Enabled: false

| Duration | Total Transferred | Avg. Throughput | Avg. Latency |
|----------|-------------------|-----------------|--------------|
| 8.5470 Secs | 305.18 MB | 35.71 MB/s | 854.70 ns |
That's about 1.17 million msg/sec.
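For anyone sanity-checking the summary, the table's figures can be re-derived from the raw counts. This is a minimal sketch, not part of the benchmark tool itself; the function names are made up for illustration, and the constants (10,000,000 messages, 32-byte payload, 8.5470 s) are taken straight from the run above.

```rust
/// Average throughput in MB/s, using MB = 1024 * 1024 bytes
/// (which is what reproduces the 305.18 MB total above).
fn throughput_mb_per_sec(messages: u64, msg_size_bytes: u64, duration_secs: f64) -> f64 {
    let total_mb = (messages * msg_size_bytes) as f64 / (1024.0 * 1024.0);
    total_mb / duration_secs
}

/// Average per-message latency in nanoseconds: total duration / message count.
fn avg_latency_ns(messages: u64, duration_secs: f64) -> f64 {
    duration_secs * 1e9 / messages as f64
}

fn main() {
    let (messages, size, secs) = (10_000_000u64, 32u64, 8.5470f64);
    println!("{:.2} MB/s", throughput_mb_per_sec(messages, size, secs)); // ~35.71 MB/s
    println!("{:.2} ns", avg_latency_ns(messages, secs)); // ~854.70 ns
    println!("{:.2} M msg/s", messages as f64 / secs / 1e6); // ~1.17 M msg/s
}
```

Note that 854.70 ns here is just the inverse of the message rate on one stream, not an end-to-end round-trip latency for a single message.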
Thanks for the feedback! We do need to address this more clearly in documentation to set expectations. We've been hesitant to do this prior to reaching 1.0 as we anticipate some improvement, but it's probably worth publishing now and revising every so often.
Hi,
I know it's a new project, but what are the estimated numbers of throughput and latency on a normal dev machine?
The documentation doesn't mention anything, but it would really help to have at least a paragraph giving an idea of the order of magnitude.
Are we talking milliseconds or microseconds latency?
What's the typical throughput for 128-byte messages vs. 512-byte messages?
Big kudos for the default support of RPC; it's a very meaningful feature.