Closed — thedevop closed this 11 months ago
I've merged this PR into my project. Let me test it in my environment. I set up 120 clients, each sending 2,500 messages, and then split the 120 clients into four groups, with every group subscribing to all of the messages. In other words, the broker will deliver 1,200,000 messages to the 120 clients (120 × 2,500 = 300,000 publishes, fanned out to the 4 subscriber groups). Let me observe it for one day.
By the way, I've changed bconn to *bufio.Reader.
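For context, here is a minimal sketch of what wrapping the connection in a *bufio.Reader might look like; the `client` struct and field names are illustrative, not the actual mochi-mqtt types:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// client is a hypothetical stand-in for the broker's client struct;
// bconn wraps the raw connection so reads are buffered.
type client struct {
	conn  net.Conn
	bconn *bufio.Reader
}

func newClient(conn net.Conn, bufSize int) *client {
	return &client{
		conn:  conn,
		bconn: bufio.NewReaderSize(conn, bufSize), // buffered reads from the raw connection
	}
}

// readByte reads a single byte through the buffered reader,
// e.g. when decoding an MQTT fixed header.
func (c *client) readByte() (byte, error) {
	return c.bconn.ReadByte()
}

func main() {
	// Demonstrate with an in-memory pipe instead of a real broker connection.
	server, clientConn := net.Pipe()
	defer server.Close()

	c := newClient(clientConn, 4096)
	go server.Write([]byte{0x30}) // pretend the broker sent a PUBLISH fixed-header byte

	b, _ := c.readByte()
	fmt.Printf("first byte: 0x%02x\n", b)
}
```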
It's been 20 hours, and the new code is running well in the test environment. 120 clients sent 19.4 billion messages to the broker. Four groups of subscribers subscribed to 95.8 billion messages from the broker, and there was no data loss. I have set my buffer size to 10,485,760. The CPU usage of the broker process is maintained at around 3800%, and the memory usage is approximately 6.5 GB. The new code doesn't show a significant difference in performance compared to the previous code that used bufio. I look forward to someone simulating a scenario with a large number of client connections.
This is super fascinating and while I've been very busy with work, I have been following it and I'm interested to see how it goes. I don't have any environments for testing on the sort of scale you are both using, so very appreciative for feedback and statistics!
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| clients.go | 24 | 31 | 77.42% |
| Total: | 24 | 31 | 77.42% |

| Totals | |
| --- | --- |
| Change from base Build 7076132597: | -0.1% |
| Covered Lines: | 5470 |
| Relevant Lines: | 5537 |
Although my environment has tens of millions of clients, it's mostly messages from client->server with very few server->client messages, so I can't evaluate the true impact either.
@mochi-co, the perf tests we've seen show at least a 5% improvement in publishing rate for busy clients. It's ready to review/merge.
@mochi-co, I don't think this will reach 6 reviewers, so when you have a chance, could you please merge? I have some follow-up changes that aren't related to this buffer but can improve publishing rate, like reducing payload writes during encoding and using sync.Pool to reduce memory allocations/GCs during packet encoding.
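For anyone curious, here is a rough sketch of the sync.Pool idea; the pool, the `encodePacket` helper, and the fixed-header bytes are hypothetical, not the actual follow-up code:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses scratch buffers across packet encodes so each encode
// doesn't allocate a fresh buffer and add GC pressure.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// encodePacket is a stand-in for packet encoding: it borrows a buffer from
// the pool, writes the wire bytes into it, copies them out, and returns
// the buffer to the pool for the next encode.
func encodePacket(payload []byte) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	buf.WriteByte(0x30)               // e.g. a PUBLISH fixed-header byte
	buf.WriteByte(byte(len(payload))) // remaining length (single byte for small payloads)
	buf.Write(payload)

	// Copy out before the buffer goes back to the pool.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}

func main() {
	pkt := encodePacket([]byte("hello"))
	fmt.Printf("% x\n", pkt)
}
```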
Yup looks good to me, just haven't had a chance - I'll merge and release now!
Released in v2.4.3, thank you @thedevop!