dotnet / MQTTnet

MQTTnet is a high performance .NET library for MQTT based communication. It provides an MQTT client and an MQTT server (broker). The implementation is based on the documentation from http://mqtt.org/.

Client Receiving Rate #1861

Open ToygarVarli opened 10 months ago

ToygarVarli commented 10 months ago

Hi there, I am trying to achieve the best possible performance while receiving messages from the server. I want to know how many messages per second a regular client can receive.

My setup:
- EMQX broker inside Docker
- .NET 6.0 console app with the MQTTnet 4.3.1.873 NuGet library
- https://github.com/inovex/mqtt-stresser as the MQTT stress-testing app
- i7-1265U CPU / 32 GB memory, MQTTnet application running in Release mode

I'm running the inovex mqtt-stresser with these parameters:

docker run --rm inovex/mqtt-stresser -broker tcp://emqx:1883 -username admin -password public -num-clients 200 -num-messages 250 -rampup-delay 0s -rampup-size 200 -global-timeout 180s -timeout 2s -publisher-qos 2

And this is my .NET implementation:

using System.Diagnostics;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Packets;

var factory = new MqttFactory();

int cnt = 0;
var sw = new Stopwatch();
var mqttClient = factory.CreateMqttClient();
var options = new MqttClientOptionsBuilder()
    .WithClientId(Guid.NewGuid().ToString())
    .WithTcpServer("localhost", 1883)
    .WithCredentials("admin", "public")
    .Build();

// Attach the handler before connecting so no early messages are missed.
mqttClient.ApplicationMessageReceivedAsync += async e =>
{
    // Skip broker system topics ($SYS/...).
    if (!e.ApplicationMessage.Topic.Contains('$'))
    {
        // Use the return value of Interlocked.Increment; re-reading cnt would be racy.
        var current = Interlocked.Increment(ref cnt);
        if (current == 1) { sw.Start(); }
        if (current == 50000)
        {
            sw.Stop();
            Interlocked.Exchange(ref cnt, 0);
            await Console.Out.WriteLineAsync(sw.Elapsed.TotalSeconds.ToString());
            sw.Reset();
        }
    }
};

await mqttClient.ConnectAsync(options);

// Subscribe to every topic.
await mqttClient.SubscribeAsync(new MqttClientSubscribeOptions
{
    TopicFilters = new List<MqttTopicFilter> { new MqttTopicFilter { Topic = "#" } }
});

Console.ReadLine();

This is the inovex mqtt-stresser output:

[screenshot: mqtt-stresser summary output]

And the MQTTnet client reports 50K messages received in 1.45 seconds, i.e., roughly 34,500 messages per second.


logicaloud commented 10 months ago

It is difficult to compare performance unless other setup parameters remain the same (i.e., same hardware, same number of clients, same topics, same QOS, etc.), and performance in this test case probably depends more on the broker than on the client. In that sense, there is no "regular" client.

It would be interesting to see what numbers you get after adjusting the MQTTnet example to use 200 MQTTnet clients instead of a single client (or possibly run the single MQTTnet client while the mqtt-stresser is also running). A rough sketch follows.
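
For reference, a minimal sketch of the 200-client variant might look like this (the broker address, credentials, and the bench-subscriber-{i} client ID scheme are placeholders; every client subscribes to # as in the original example, so the broker delivers each message once per client):

using System.Diagnostics;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Packets;

var factory = new MqttFactory();
int received = 0;
var sw = Stopwatch.StartNew();
var clients = new List<IMqttClient>();

// Spin up 200 subscribers, each with its own connection and unique client ID.
for (var i = 0; i < 200; i++)
{
    var client = factory.CreateMqttClient();
    client.ApplicationMessageReceivedAsync += e =>
    {
        Interlocked.Increment(ref received);
        return Task.CompletedTask;
    };

    var options = new MqttClientOptionsBuilder()
        .WithClientId($"bench-subscriber-{i}") // placeholder ID scheme
        .WithTcpServer("localhost", 1883)
        .WithCredentials("admin", "public")
        .Build();

    await client.ConnectAsync(options);
    await client.SubscribeAsync(new MqttClientSubscribeOptions
    {
        TopicFilters = new List<MqttTopicFilter> { new MqttTopicFilter { Topic = "#" } }
    });
    clients.Add(client);
}

// Report the aggregate receive rate once per second while the stress test runs.
while (true)
{
    await Task.Delay(1000);
    var total = Volatile.Read(ref received);
    Console.WriteLine($"{total / sw.Elapsed.TotalSeconds:F0} msg/s across {clients.Count} clients");
}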

logicaloud commented 10 months ago

I have tried running mqtt-stresser (the container version) against EMQX and MQTTnet, but when run against MQTTnet, mqtt-stresser more often than not does not complete the test (at least with the given parameters). After some digging I found that mqtt-stresser seems to reconnect with the same client ID at times, which results in a session take-over in MQTTnet; session take-over is an MQTT 5 feature that is not well defined in MQTT 3.1.1, so there may be some incompatibility here. Needs more investigation.
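
To illustrate what that take-over looks like from the client side, here is a minimal sketch (the broker address and the reused stress-client-1 ID are made up for the example); connecting a second client with the same ID makes the broker drop, or take over, the first session:

using MQTTnet;
using MQTTnet.Client;

var factory = new MqttFactory();

var options = new MqttClientOptionsBuilder()
    .WithClientId("stress-client-1")   // deliberately reused client ID (placeholder)
    .WithTcpServer("localhost", 1883)
    .Build();

var first = factory.CreateMqttClient();
first.DisconnectedAsync += e =>
{
    // An MQTT 5 broker reports a "session taken over" reason here;
    // under MQTT 3.1.1 the old connection is simply closed.
    Console.WriteLine($"First client disconnected: {e.Reason}");
    return Task.CompletedTask;
};
await first.ConnectAsync(options);

// A second connection with the same client ID triggers the take-over.
var second = factory.CreateMqttClient();
await second.ConnectAsync(options);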

The following are some benchmarks that are part of the MQTTnet library, run against both EMQX and MQTTnet. In my setup, the MQTTnet client communicates with an Ubuntu virtual machine running either a Docker container with EMQX or a Docker container with MQTTnet. These are the results I got.

Edit: You can ignore the NumTopicsPerPublisher here, see remarks in the post below.


Legend:

| Key | Method | NumTopicsPerPublisher | NumPublishers | NumSubscribers | NumTopicsPerSubscriber |
|-----|--------|-----------------------|---------------|----------------|------------------------|
| a | Message Delivery | 1 | 1000 | 10 | 5 |
| b | Message Delivery | 1 | 1000 | 10 | 10 |
| c | Message Delivery | 1 | 1000 | 10 | 20 |
| d | Message Delivery | 1 | 1000 | 10 | 50 |
| e | Message Delivery | 1 | 10000 | 10 | 5 |
| f | Message Delivery | 1 | 10000 | 10 | 10 |
| g | Message Delivery | 1 | 10000 | 10 | 20 |
| h | Message Delivery | 1 | 10000 | 10 | 50 |
| i | Message Delivery | 5 | 1000 | 10 | 5 |
| j | Message Delivery | 5 | 1000 | 10 | 10 |
| k | Message Delivery | 5 | 1000 | 10 | 20 |
| l | Message Delivery | 5 | 1000 | 10 | 50 |
| m | Message Delivery | 5 | 10000 | 10 | 5 |
| n | Message Delivery | 5 | 10000 | 10 | 10 |
| o | Message Delivery | 5 | 10000 | 10 | 20 |
| p | Message Delivery | 5 | 10000 | 10 | 50 |
| q | Message Processing (send 10000 messages, result scaled by a factor of 0.1) | | | | |
| r | Subscribe to 10000 topics | | | | |
| s | Unsubscribe from 10000 topics | | | | |

Data:

| Key | EMQX [ms] | MQTTnet [ms] |
|-----|-----------|--------------|
| a | 1.248 | 1.167 |
| b | 2.223 | 2.107 |
| c | 4.220 | 4.015 |
| d | 10.407 | 9.809 |
| e | 1.327 | 1.293 |
| f | 2.317 | 2.541 |
| g | 4.447 | 4.640 |
| h | 11.120 | 11.940 |
| i | 1.227 | 1.180 |
| j | 2.223 | 2.120 |
| k | 4.195 | 3.995 |
| l | 10.348 | 9.901 |
| m | 1.274 | 1.264 |
| n | 2.268 | 2.372 |
| o | 4.536 | 4.641 |
| p | 11.145 | 11.520 |
| q | 7.297 | 7.348 |
| r | 11.100 | 1.971 |
| s | 1.660 | 1.957 |

ToygarVarli commented 10 months ago

@logicaloud Hi there,

Let's look at line j in the second table.

EMQX: 2.223 ms, MQTTnet: 2.120 ms

I think the MQTTnet column in the second table represents the MQTTnet broker.

So,

if I run the EMQX broker and send 5 (topics) * 1000 (publishers) = 5000 messages, with an MQTTnet client already subscribed to the EMQX broker, then processing those 5000 messages takes 2.223 ms.

Or, if I run the MQTTnet broker and send 5 (topics) * 1000 (publishers) = 5000 messages, with an MQTTnet client already subscribed to the MQTTnet broker, then processing those 5000 messages takes 2.120 ms.

Am I interpreting your test results correctly?

logicaloud commented 10 months ago

Good point. The number of topics per publisher is irrelevant for message delivery, and the number of publishers simply increases the number of client connections that the broker needs to hold. So the number of messages that are published and delivered is NumberOfSubscribers * NumberOfSubscriberTopics, ranging from 50 to 500 for the various tests. When I normalise the tests to messages per second (NumberOfSubscribers * NumberOfSubscriberTopics / ms * 1000), I get around 40k-50k messages per second for my specific setup.
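
Taking row j from the data table as a worked example: 10 subscribers * 10 topics per subscriber = 100 delivered messages, measured at 2.120 ms for MQTTnet, which normalises to 100 / 2.120 * 1000, roughly 47,000 messages per second.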

The original intention of the message delivery tests was to measure the topic filtering performance of the broker, and NumTopicsPerPublisher was meant to increase the number of topics that do not have subscribers, to see how well the broker eliminates those. While looking at the sources, though, I stumbled across a bug in the test: it seems that topics without subscribers are not published, which means there is no extra publishing and filtering work on the broker side. It would be interesting to see the results when these extra topics are included. I may re-run the tests including these extra topics and update the numbers.

logicaloud commented 10 months ago

Here are some numbers with the NumTopicsPerPublisher bug fixed, meaning that many topics are now published that do not have any subscribers. The bulk of the time measured now seems to be the overhead of publishing and processing the extra topics on the client and server side, since everything runs on the same physical computer. Performance remains comparable. It is best to run the tests in your own environment with your likely scenario to get a feel for the performance you can expect.

I should also note that MQTTnet is currently geared towards "fan-in", that is, having many publishers and few subscribers; for fan-out scenarios, performance will probably not be quite as good.

EMQX:

| Method | NumTopicsPerPublisher | NumPublishers | NumSubscribers | NumTopicsPerSubscriber | Mean | Error | StdDev |
|--------|-----------------------|---------------|----------------|------------------------|------|-------|--------|
| Message Delivery | 1 | 1000 | 10 | 5 | 17.35 ms | 0.132 ms | 0.123 ms |
| Message Delivery | 1 | 1000 | 10 | 10 | 17.58 ms | 0.248 ms | 0.232 ms |
| Message Delivery | 1 | 1000 | 10 | 20 | 18.09 ms | 0.146 ms | 0.137 ms |
| Message Delivery | 1 | 1000 | 10 | 50 | 19.18 ms | 0.173 ms | 0.154 ms |
| Message Delivery | 5 | 1000 | 10 | 5 | 73.01 ms | 0.922 ms | 0.862 ms |
| Message Delivery | 5 | 1000 | 10 | 10 | 72.70 ms | 0.903 ms | 0.845 ms |
| Message Delivery | 5 | 1000 | 10 | 20 | 74.02 ms | 1.017 ms | 0.951 ms |
| Message Delivery | 5 | 1000 | 10 | 50 | 74.66 ms | 1.172 ms | 0.979 ms |

MQTTnet:

| Method | NumTopicsPerPublisher | NumPublishers | NumSubscribers | NumTopicsPerSubscriber | Mean | Error | StdDev |
|--------|-----------------------|---------------|----------------|------------------------|------|-------|--------|
| Message Delivery | 1 | 1000 | 10 | 5 | 17.32 ms | 0.191 ms | 0.179 ms |
| Message Delivery | 1 | 1000 | 10 | 10 | 17.95 ms | 0.294 ms | 0.275 ms |
| Message Delivery | 1 | 1000 | 10 | 20 | 18.39 ms | 0.226 ms | 0.211 ms |
| Message Delivery | 1 | 1000 | 10 | 50 | 19.55 ms | 0.346 ms | 0.355 ms |
| Message Delivery | 5 | 1000 | 10 | 5 | 74.22 ms | 0.558 ms | 0.522 ms |
| Message Delivery | 5 | 1000 | 10 | 10 | 73.85 ms | 0.427 ms | 0.399 ms |
| Message Delivery | 5 | 1000 | 10 | 20 | 74.52 ms | 0.533 ms | 0.499 ms |
| Message Delivery | 5 | 1000 | 10 | 50 | 76.09 ms | 0.859 ms | 0.803 ms |