eclipse / paho.mqtt.c

An Eclipse Paho C client library for MQTT for Windows, Linux and MacOS. API documentation: https://eclipse.github.io/paho.mqtt.c/
https://eclipse.org/paho

AsyncClient sends periodic messages too slow with QoS 2 and maxInflight 1 #1355

Open amslinger-xitaso opened 1 year ago

amslinger-xitaso commented 1 year ago

Describe the bug
In my application, I am periodically sending update messages every 200ms on multiple topics. However, the client subscribed to the topic receives batches of 3 messages every 600ms. Furthermore, it seems that there is a backlog of messages even though the system should be fast enough. This behavior can also be observed with MQTT Explorer.

To Reproduce
I have tried creating a minimal example using the async C library directly. I managed to recreate the behavior of messages not being sent as fast as requested. (In the example below I receive 1 message/400ms even though the sleep is only 200ms.) However, as soon as I remove the other (real) clients the batching no longer occurs. I did not manage to write a minimal second client that causes the batching.

#include "MQTTAsync.h"

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <string>
#include <thread>

void publishMessage(const MQTTAsync& client, const std::string& topic, const std::string& payload)
{
    MQTTAsync_message pubmsg = MQTTAsync_message_initializer;
    pubmsg.payload = (void*)payload.c_str();
    pubmsg.payloadlen = payload.length();
    pubmsg.qos = 2;
    MQTTAsync_sendMessage(client, topic.c_str(), &pubmsg, nullptr);
}

int main()
{
    MQTTAsync client;
    MQTTAsync_create(&client, "tcp://localhost:1883", "PeriodicPublisher", MQTTCLIENT_PERSISTENCE_NONE, NULL);
    MQTTAsync_connectOptions conn_opts = MQTTAsync_connectOptions_initializer;
    conn_opts.maxInflight = 1;
    MQTTAsync_connect(client, &conn_opts);

    std::this_thread::sleep_for(std::chrono::milliseconds(1000));

    while (true)
    {
        std::string timestamp = std::to_string(std::chrono::system_clock::now().time_since_epoch().count());
        publishMessage(client, "PeriodicTopicA", "Periodic Message A " + timestamp);
        // Inserting a small sleep here makes everything work as expected.
        publishMessage(client, "PeriodicTopicB", "Periodic Message B " + timestamp);
        // The 200 can be any number. It will always take twice as long.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }

    MQTTAsync_destroy(&client);
    return 0;
}

Expected behavior
I expect the messages to be received at the publish interval on both topics without any batching.

Environment (please complete the following information):

Additional context
After some rudimentary debugging, it seems that the sending thread is stuck waiting for send_sem. Indeed, after removing the timeout = 1000 line from MQTTAsync_sendThread, which apparently results in a 10ms timeout by default, the other client will receive 1 message per topic every 200ms.

jumoog commented 1 year ago

max_inflight_messages The maximum number of outgoing QoS 1 or 2 messages that can be in the process of being transmitted simultaneously. This includes messages currently going through handshakes and messages that are being retried. If set to 1, this will guarantee in-order delivery of messages.

QoS 2 is a four-part handshake, so of course it takes much longer to process each message, and because only one message can be in flight at a time, a traffic jam occurs. Is the broker running on the same computer? Then you may have the following problem: #530

amslinger-xitaso commented 1 year ago

I am aware that QoS 2 means a higher network load. However, the example code only publishes a total of 10 msgs/s, and my real application maybe 20 msgs/s on a dedicated broker. Surely MQTT has to be able to handle that. The broker is indeed running on the same computer (this might change in the future, but network latency should always be <1 ms).

I have also considered whether Nagle's Algorithm is the problem; it would surely be worth a try. However, I think this issue is more likely to be some kind of race condition/synchronization issue: if I insert a small delay between both messages (or remove the timeout), communication happens a lot faster, which I would not expect if the issue were caused by the network stack. Furthermore, I have stopped investigating this cause, since according to the internet Nagle's Algorithm is not active for localhost connections on Windows 10.

icraggs commented 1 year ago

Are you sure the "delay" is in the publishing application and not in the broker or subscriber?

A protocol level trace of the publisher might show something. On my machine, a part of the protocol trace output using your test snippet above is pretty much what you intended, I think:

20230505 131746.042 3 PeriodicPublisher -> PUBLISH msgid: 6 qos: 2 retained: 0 rc 0 payload len(38): Periodic Message B 1
20230505 131746.042 3 PeriodicPublisher <- PUBREC msgid: 6
20230505 131746.042 3 PeriodicPublisher -> PUBREL msgid: 6 (0)
20230505 131746.043 3 PeriodicPublisher <- PUBCOMP msgid:6
20230505 131746.242 3 PeriodicPublisher -> PUBLISH msgid: 7 qos: 2 retained: 0 rc 0 payload len(38): Periodic Message A 1
20230505 131746.243 3 PeriodicPublisher <- PUBREC msgid: 7
20230505 131746.243 3 PeriodicPublisher -> PUBREL msgid: 7 (0)
20230505 131746.243 3 PeriodicPublisher <- PUBCOMP msgid:7
20230505 131746.442 3 PeriodicPublisher -> PUBLISH msgid: 8 qos: 2 retained: 0 rc 0 payload len(38): Periodic Message B 1
20230505 131746.443 3 PeriodicPublisher <- PUBREC msgid: 8
20230505 131746.443 3 PeriodicPublisher -> PUBREL msgid: 8 (0)
20230505 131746.443 3 PeriodicPublisher <- PUBCOMP msgid:8
20230505 131746.643 3 PeriodicPublisher -> PUBLISH msgid: 9 qos: 2 retained: 0 rc 0 payload len(38): Periodic Message A 1
20230505 131746.643 3 PeriodicPublisher <- PUBREC msgid: 9
20230505 131746.643 3 PeriodicPublisher -> PUBREL msgid: 9 (0)
20230505 131746.643 3 PeriodicPublisher <- PUBCOMP msgid:9
20230505 131746.843 3 PeriodicPublisher -> PUBLISH msgid: 10 qos: 2 retained: 0 rc 0 payload len(38): Periodic Message B 1
20230505 131746.843 3 PeriodicPublisher <- PUBREC msgid: 10
20230505 131746.843 3 PeriodicPublisher -> PUBREL msgid: 10 (0)
20230505 131746.843 3 PeriodicPublisher <- PUBCOMP msgid:10
amslinger-xitaso commented 1 year ago

I think you are observing the same issue as I do. Note that there is no sleep between the publish to PeriodicTopicA and PeriodicTopicB.

In your log:
msgid 7 (A) is published at 131746.242
msgid 8 (B) is published at 131746.442 (200 ms later)
msgid 9 (A) is published at 131746.643 (201 ms later)
msgid 10 (B) is published at 131746.843 (200 ms later)

My expected behavior would be:
msgid 7 (A) is published at 131746.242
msgid 8 (B) is published at 131746.243 (1 ms later, when PUBCOMP is received)
msgid 9 (A) is published at 131746.442 (199 ms later)
msgid 10 (B) is published at 131746.442 (~0 ms later, when PUBCOMP is received)

Just for completeness, here is a similar trace from my machine:

20230505 162004.699 580 PeriodicPublisher -> PUBLISH msgid: 1 qos: 2 retained: 0 rc 0 payload len(30): Periodic Message A 1
20230505 162004.699 580 PeriodicPublisher <- PUBREC msgid: 1
20230505 162004.699 580 PeriodicPublisher -> PUBREL msgid: 1 (0)
20230505 162004.700 580 PeriodicPublisher <- PUBCOMP msgid:1
20230505 162004.912 580 PeriodicPublisher -> PUBLISH msgid: 2 qos: 2 retained: 0 rc 0 payload len(30): Periodic Message B 1
20230505 162004.912 580 PeriodicPublisher <- PUBREC msgid: 2
20230505 162004.912 580 PeriodicPublisher -> PUBREL msgid: 2 (0)
20230505 162004.912 580 PeriodicPublisher <- PUBCOMP msgid:2
20230505 162005.123 580 PeriodicPublisher -> PUBLISH msgid: 3 qos: 2 retained: 0 rc 0 payload len(30): Periodic Message A 1
20230505 162005.124 580 PeriodicPublisher <- PUBREC msgid: 3
20230505 162005.124 580 PeriodicPublisher -> PUBREL msgid: 3 (0)
20230505 162005.124 580 PeriodicPublisher <- PUBCOMP msgid:3
20230505 162005.335 580 PeriodicPublisher -> PUBLISH msgid: 4 qos: 2 retained: 0 rc 0 payload len(30): Periodic Message B 1
20230505 162005.335 580 PeriodicPublisher <- PUBREC msgid: 4
20230505 162005.335 580 PeriodicPublisher -> PUBREL msgid: 4 (0)
20230505 162005.335 580 PeriodicPublisher <- PUBCOMP msgid:4

I have also tried modifying my timestamp so that it matches the trace's format. Now you can clearly see that a queue of messages is building up:

20230505 162137.480 572 PeriodicPublisher -> PUBLISH msgid: 1 qos: 2 retained: 0 rc 0 payload len(13): A 162137.480
20230505 162137.481 572 PeriodicPublisher <- PUBREC msgid: 1
20230505 162137.481 572 PeriodicPublisher -> PUBREL msgid: 1 (0)
20230505 162137.481 572 PeriodicPublisher <- PUBCOMP msgid:1
20230505 162137.694 572 PeriodicPublisher -> PUBLISH msgid: 2 qos: 2 retained: 0 rc 0 payload len(13): B 162137.480
20230505 162137.695 572 PeriodicPublisher <- PUBREC msgid: 2
20230505 162137.695 572 PeriodicPublisher -> PUBREL msgid: 2 (0)
20230505 162137.696 572 PeriodicPublisher <- PUBCOMP msgid:2
20230505 162137.895 572 PeriodicPublisher -> PUBLISH msgid: 3 qos: 2 retained: 0 rc 0 payload len(13): A 162137.694
20230505 162137.895 572 PeriodicPublisher <- PUBREC msgid: 3
20230505 162137.896 572 PeriodicPublisher -> PUBREL msgid: 3 (0)
20230505 162137.896 572 PeriodicPublisher <- PUBCOMP msgid:3
20230505 162138.108 572 PeriodicPublisher -> PUBLISH msgid: 4 qos: 2 retained: 0 rc 0 payload len(13): B 162137.694
20230505 162138.109 572 PeriodicPublisher <- PUBREC msgid: 4
20230505 162138.109 572 PeriodicPublisher -> PUBREL msgid: 4 (0)
20230505 162138.110 572 PeriodicPublisher <- PUBCOMP msgid:4
20230505 162138.320 572 PeriodicPublisher -> PUBLISH msgid: 5 qos: 2 retained: 0 rc 0 payload len(13): A 162137.894
20230505 162138.321 572 PeriodicPublisher <- PUBREC msgid: 5
20230505 162138.321 572 PeriodicPublisher -> PUBREL msgid: 5 (0)
20230505 162138.322 572 PeriodicPublisher <- PUBCOMP msgid:5
20230505 162138.532 572 PeriodicPublisher -> PUBLISH msgid: 6 qos: 2 retained: 0 rc 0 payload len(13): B 162137.894
20230505 162138.532 572 PeriodicPublisher <- PUBREC msgid: 6
20230505 162138.532 572 PeriodicPublisher -> PUBREL msgid: 6 (0)
20230505 162138.533 572 PeriodicPublisher <- PUBCOMP msgid:6
// Helper that produces the timestamps embedded in the payloads above.
#include <sys/timeb.h>
#include <cstdio>
#include <ctime>
#include <string>

std::string getTimestamp()
{
    timeb now;
    ftime(&now);
    tm timeinfo;
    localtime_s(&timeinfo, &now.time);
    char buffer[64];
    strftime(buffer, sizeof(buffer), "%H%M%S", &timeinfo);
    snprintf(buffer + 6, sizeof(buffer) - 6, ".%.3hu ", now.millitm);
    return std::string(buffer);
}
icraggs commented 1 year ago

OK, yes. You've chosen to slow things down by having a max in-flight of 1 as well. Increasing that allows the messages to flow more quickly.

You can use the asynchronous nature of the API more. You can wait for the connect completion rather than sleeping. You can also wait for the publish completion before sending the next one, as in the following:

#include "MQTTAsync.h"

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <string>
#include <thread>

// Called when the QoS 2 flow for the A message completes; publish the B message then.
void onSuccess(void* context, MQTTAsync_successData* response)
{
    MQTTAsync_message pubmsg = MQTTAsync_message_initializer;
    std::string timestamp = std::to_string(std::chrono::system_clock::now().time_since_epoch().count());
    std::string topic = "PeriodicTopicB";
    std::string payload = "Periodic Message B " + timestamp;

    pubmsg.payload = (void*)payload.c_str();
    pubmsg.payloadlen = payload.length();
    pubmsg.qos = 2;
    MQTTAsync_sendMessage(context, topic.c_str(), &pubmsg, nullptr);
}

void publishMessage(const MQTTAsync& client, const std::string& topic, const std::string& payload)
{
    MQTTAsync_message pubmsg = MQTTAsync_message_initializer;
    pubmsg.payload = (void*)payload.c_str();
    pubmsg.payloadlen = payload.length();
    pubmsg.qos = 2;

    MQTTAsync_responseOptions opts = MQTTAsync_responseOptions_initializer;
    opts.onSuccess = onSuccess;
    opts.context = client;
    MQTTAsync_sendMessage(client, topic.c_str(), &pubmsg, &opts);
}

int main()
{
    MQTTAsync client;
    MQTTAsync_create(&client, "tcp://localhost:1883", "PeriodicPublisher", MQTTCLIENT_PERSISTENCE_NONE, NULL);
    MQTTAsync_connectOptions conn_opts = MQTTAsync_connectOptions_initializer;
    conn_opts.maxInflight = 1;
    MQTTAsync_connect(client, &conn_opts);

    std::this_thread::sleep_for(std::chrono::milliseconds(1000));

    while (true)
    {
        std::string timestamp = std::to_string(std::chrono::system_clock::now().time_since_epoch().count());
        publishMessage(client, "PeriodicTopicA", "Periodic Message A " + timestamp);
        // Inserting a small sleep here makes everything work as expected.
        //publishMessage(client, "PeriodicTopicB", "Periodic Message B " + timestamp);
        // The 200 can be any number. It will always take twice as long.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }

    MQTTAsync_destroy(&client);
    return 0;
}
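For illustration, a minimal sketch of the "wait for the connect completion" part, using the onSuccess hook on the connect options instead of a fixed sleep (the connected flag and onConnect are made-up names for the example):

#include "MQTTAsync.h"

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> connected{false};

// Fired once the CONNACK has been processed.
void onConnect(void* context, MQTTAsync_successData* response)
{
    connected = true;
}

int main()
{
    MQTTAsync client;
    MQTTAsync_create(&client, "tcp://localhost:1883", "PeriodicPublisher", MQTTCLIENT_PERSISTENCE_NONE, NULL);

    MQTTAsync_connectOptions conn_opts = MQTTAsync_connectOptions_initializer;
    conn_opts.maxInflight = 1;
    conn_opts.onSuccess = onConnect;
    conn_opts.context = client;
    MQTTAsync_connect(client, &conn_opts);

    // Poll the flag instead of sleeping a fixed 1000 ms; a condition variable would also work.
    while (!connected)
        std::this_thread::sleep_for(std::chrono::milliseconds(10));

    // ... publish loop as before ...

    MQTTAsync_destroy(&client);
    return 0;
}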
amslinger-xitaso commented 1 year ago

Thanks for your suggestions. Let me elaborate a little on the real application: there is one process that periodically publishes (non-primitive) sensor data from multiple sensors, and there are multiple processes interested in this data. Most processes only care about the data of one sensor. However, there is also a process that needs all sensor data, and it is important that this client gets sensor data that was measured at the same time; it may not miss any data or receive any duplicates.

So I tried the simple approach of storing the data of the first sensors in variables and running the processing when the last message is received. But then of course there is the problem that MQTT does not guarantee message order across topics by default. While I could perhaps fix that with somewhat more complex logic (e.g. checking in every receive function whether all the other data has already been received), there is an even worse problem: sometimes the client receives two messages from one sensor before receiving a single message from another sensor. (And things might get even more complex in the future, with a sensor that does not reliably/always provide data and other processes sending control messages that also need to be in some kind of order.)

This is the issue I try to avoid by using in_flight 1 in both the broker and the client. Now I am certain that the publisher client only sends its next A message after it has sent the previously published B message, and the same holds for the broker. I do not want to slow things down; I am aware that enforcing message ordering always incurs some performance overhead. However, I would hope that 10 msgs/s is possible on localhost.

Interestingly, your modified source code results in messages being published every 200ms as expected:

20230508 145138.914 576 PeriodicPublisher -> PUBLISH msgid: 1 qos: 2 retained: 0 rc 0 payload len(13): A 145138.913
20230508 145138.914 576 PeriodicPublisher <- PUBREC msgid: 1
20230508 145138.914 576 PeriodicPublisher -> PUBREL msgid: 1 (0)
20230508 145138.914 576 PeriodicPublisher <- PUBCOMP msgid:1
20230508 145138.915 576 PeriodicPublisher -> PUBLISH msgid: 2 qos: 2 retained: 0 rc 0 payload len(13): B 145138.915
20230508 145138.915 576 PeriodicPublisher <- PUBREC msgid: 2
20230508 145138.915 576 PeriodicPublisher -> PUBREL msgid: 2 (0)
20230508 145138.915 576 PeriodicPublisher <- PUBCOMP msgid:2
20230508 145139.128 576 PeriodicPublisher -> PUBLISH msgid: 3 qos: 2 retained: 0 rc 0 payload len(13): A 145139.128
20230508 145139.128 576 PeriodicPublisher <- PUBREC msgid: 3
20230508 145139.128 576 PeriodicPublisher -> PUBREL msgid: 3 (0)
20230508 145139.129 576 PeriodicPublisher <- PUBCOMP msgid:3
20230508 145139.129 576 PeriodicPublisher -> PUBLISH msgid: 4 qos: 2 retained: 0 rc 0 payload len(13): B 145139.129
20230508 145139.130 576 PeriodicPublisher <- PUBREC msgid: 4
20230508 145139.130 576 PeriodicPublisher -> PUBREL msgid: 4 (0)
20230508 145139.130 576 PeriodicPublisher <- PUBCOMP msgid:4
20230508 145139.342 576 PeriodicPublisher -> PUBLISH msgid: 5 qos: 2 retained: 0 rc 0 payload len(13): A 145139.341
20230508 145139.342 576 PeriodicPublisher <- PUBREC msgid: 5
20230508 145139.342 576 PeriodicPublisher -> PUBREL msgid: 5 (0)
20230508 145139.343 576 PeriodicPublisher <- PUBCOMP msgid:5
20230508 145139.343 576 PeriodicPublisher -> PUBLISH msgid: 6 qos: 2 retained: 0 rc 0 payload len(13): B 145139.343
20230508 145139.343 576 PeriodicPublisher <- PUBREC msgid: 6
20230508 145139.343 576 PeriodicPublisher -> PUBREL msgid: 6 (0)
20230508 145139.343 576 PeriodicPublisher <- PUBCOMP msgid:6

Unfortunately, this does not solve my problem. Now it is once again possible that two A messages are sent before one B message if the PUBCOMP is delayed for some reason. Of course I could use a queue for the messages and only send the A message directly if the queue is empty, appending it to the queue otherwise. But that queue would also need to be thread safe, which gives me the feeling that I am just replacing a broken thread-safe queue in the library with a new thread-safe queue of my own. I assume your solution only works quickly because the MQTTAsync_sendMessage call (indirectly?) posts the send_sem that MQTTAsync_sendThread is always waiting on when I debug it.
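For what it's worth, a rough sketch of what such an application-level queue could look like (illustrative only; OrderedPublisher and its members are made-up names, and sending the next message from the onSuccess callback follows the same pattern as the example above):

#include "MQTTAsync.h"

#include <deque>
#include <mutex>
#include <string>
#include <utility>

// Hands at most one message to the library at a time; the next one is sent from the
// previous message's onSuccess callback (i.e. after its PUBCOMP at QoS 2).
struct OrderedPublisher
{
    MQTTAsync client{};
    std::mutex mutex;
    std::deque<std::pair<std::string, std::string>> queue; // (topic, payload)
    bool inFlight = false;

    void publish(const std::string& topic, const std::string& payload)
    {
        std::lock_guard<std::mutex> lock(mutex);
        queue.emplace_back(topic, payload);
        if (!inFlight)
            sendNextLocked();
    }

private:
    // Must be called with the mutex held.
    void sendNextLocked()
    {
        if (queue.empty())
        {
            inFlight = false;
            return;
        }
        std::pair<std::string, std::string> next = queue.front();
        queue.pop_front();
        inFlight = true;

        MQTTAsync_message pubmsg = MQTTAsync_message_initializer;
        pubmsg.payload = (void*)next.second.c_str();
        pubmsg.payloadlen = (int)next.second.length();
        pubmsg.qos = 2;

        MQTTAsync_responseOptions opts = MQTTAsync_responseOptions_initializer;
        opts.onSuccess = onSendSuccess;
        opts.context = this;
        MQTTAsync_sendMessage(client, next.first.c_str(), &pubmsg, &opts);
    }

    // Previous publish completed; send the next queued message, if any.
    static void onSendSuccess(void* context, MQTTAsync_successData* response)
    {
        OrderedPublisher* self = static_cast<OrderedPublisher*>(context);
        std::lock_guard<std::mutex> lock(self->mutex);
        self->sendNextLocked();
    }
};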

(I already wait for the connection to be established in my full application; the sleep in the reduced code is just for simplicity.)

icraggs commented 1 year ago

You set max inflight messages to 1 and use QoS 2, which means there are always going to be delays. The B message is prevented from being sent immediately by the max inflight message count.

For the process that receives messages from multiple sensors, I don't think there is any way to avoid some reconciliation at the subscribing end. While QoS values of 1 and 2 provide some guarantees between client and broker, there is no MQTT guarantee at the broker end of the forwarding of messages from different sources.
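To illustrate that reconciliation (purely a sketch with made-up names, assuming each payload carries the sample timestamp as in the examples above): buffer the payloads per timestamp and process a sample only once both topics have reported.

#include <map>
#include <mutex>
#include <optional>
#include <string>

struct SamplePair
{
    std::optional<std::string> a; // payload from PeriodicTopicA
    std::optional<std::string> b; // payload from PeriodicTopicB
};

struct Reconciler
{
    // Called from the message-arrived callback for either topic.
    void onMessage(const std::string& topic, const std::string& timestamp, const std::string& payload)
    {
        std::lock_guard<std::mutex> lock(mutex);
        SamplePair& pair = pending[timestamp];
        (topic == "PeriodicTopicA" ? pair.a : pair.b) = payload;
        if (pair.a && pair.b)
        {
            // Both sensors have reported for this sample time; hand off to the application.
            process(*pair.a, *pair.b);
            pending.erase(timestamp);
        }
    }

    void process(const std::string& a, const std::string& b) { /* application logic */ }

    std::mutex mutex;
    std::map<std::string, SamplePair> pending;
};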

You said:

Unfortunately, this does not solve my problem. Now it is once again possible that two A messages are sent before one B message if the PUBCOMP is delayed for some reason.

My code was just an example. You can wait for the PUBCOMP of the B message as well, by waiting for the onSuccess callback of that message too.

amslinger-xitaso commented 1 year ago

In general, there is no guarantee how the broker handles the order of QoS 2 messages. However, the Mosquitto documentation states: "max_inflight_messages count The maximum number of outgoing QoS 1 or 2 messages that can be in the process of being transmitted simultaneously. This includes messages currently going through handshakes and messages that are being retried. Defaults to 20. Set to 0 for no maximum. If set to 1, this will guarantee in-order delivery of messages." Therefore, if I set in_flight 1 in the client and max_inflight_messages 1 in the broker, I should get the in-order delivery I want. In my testing I have not observed any out-of-order messages yet.

One thing to note is that (currently) one process reads all sensors (since they are just connected to a PC), so only a single process is actually publishing (this might change in the future, which might cause issues if e.g. one process freezes).
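For reference, the max_inflight_messages setting quoted above would look like this in an illustrative mosquitto.conf excerpt:

# Allow only one outgoing QoS 1/2 message in flight per client; per the
# documentation quoted above this guarantees in-order delivery.
max_inflight_messages 1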

Anyway, I think this discussion (while it might be helpful for future design decisions in my application) is taking us off the path to finding the bug. I am fairly certain that the behavior I am observing is a multithreading issue within a single process. If I remove the timeout = 1000 line in https://github.com/eclipse/paho.mqtt.c/blob/master/src/MQTTAsyncUtils.c#L1812 (MQTTAsync_sendThread), then messages are sent quickly without any need to wait for async completions or any changes to the broker configuration. Of course busy waiting is not really a "bugfix". Do you know if it is possible to wake the thread from its sleep when PUBCOMP is received? (I would expect this to happen in the library itself.)

This is how the trace looks without timeout = 1000 and both messages published in main (like in my first post):

20230510 152803.875 580 PeriodicPublisher -> PUBLISH msgid: 1 qos: 2 retained: 0 rc 0 payload len(13): A 152803.875
20230510 152803.876 580 PeriodicPublisher <- PUBREC msgid: 1
20230510 152803.876 580 PeriodicPublisher -> PUBREL msgid: 1 (0)
20230510 152803.877 580 PeriodicPublisher <- PUBCOMP msgid:1
20230510 152803.890 580 PeriodicPublisher -> PUBLISH msgid: 2 qos: 2 retained: 0 rc 0 payload len(13): B 152803.875
20230510 152803.891 580 PeriodicPublisher <- PUBREC msgid: 2
20230510 152803.891 580 PeriodicPublisher -> PUBREL msgid: 2 (0)
20230510 152803.891 580 PeriodicPublisher <- PUBCOMP msgid:2
20230510 152804.089 580 PeriodicPublisher -> PUBLISH msgid: 3 qos: 2 retained: 0 rc 0 payload len(13): A 152804.089
20230510 152804.090 580 PeriodicPublisher <- PUBREC msgid: 3
20230510 152804.090 580 PeriodicPublisher -> PUBREL msgid: 3 (0)
20230510 152804.090 580 PeriodicPublisher <- PUBCOMP msgid:3
20230510 152804.104 580 PeriodicPublisher -> PUBLISH msgid: 4 qos: 2 retained: 0 rc 0 payload len(13): B 152804.089
20230510 152804.104 580 PeriodicPublisher <- PUBREC msgid: 4
20230510 152804.105 580 PeriodicPublisher -> PUBREL msgid: 4 (0)
20230510 152804.105 580 PeriodicPublisher <- PUBCOMP msgid:4
20230510 152804.303 580 PeriodicPublisher -> PUBLISH msgid: 5 qos: 2 retained: 0 rc 0 payload len(13): A 152804.302
20230510 152804.303 580 PeriodicPublisher <- PUBREC msgid: 5
20230510 152804.304 580 PeriodicPublisher -> PUBREL msgid: 5 (0)
20230510 152804.304 580 PeriodicPublisher <- PUBCOMP msgid:5
20230510 152804.317 580 PeriodicPublisher -> PUBLISH msgid: 6 qos: 2 retained: 0 rc 0 payload len(13): B 152804.302
20230510 152804.317 580 PeriodicPublisher <- PUBREC msgid: 6
20230510 152804.318 580 PeriodicPublisher -> PUBREL msgid: 6 (0)
20230510 152804.318 580 PeriodicPublisher <- PUBCOMP msgid:6
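To make the question concrete, here is a generic sketch (made-up names, not the library's actual code) of the difference between a send loop that only wakes on a timeout and one that is also signalled from the receive path when a PUBCOMP frees the in-flight slot:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct Sender
{
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> pending;
    int inFlight = 0;
    const int maxInflight = 1;

    void enqueue(std::string msg)
    {
        {
            std::lock_guard<std::mutex> lock(m);
            pending.push(std::move(msg));
        }
        cv.notify_one(); // new work: wake the send loop immediately
    }

    void onPubcompReceived() // called from the receive path
    {
        {
            std::lock_guard<std::mutex> lock(m);
            --inFlight;
        }
        cv.notify_one(); // slot freed: wake the send loop immediately
    }

    void sendLoop()
    {
        std::unique_lock<std::mutex> lock(m);
        for (;;)
        {
            // Without the notify in onPubcompReceived, this wait would only end when the
            // timeout expires, which matches the stalls visible in the traces above.
            cv.wait_for(lock, std::chrono::seconds(1),
                        [this] { return !pending.empty() && inFlight < maxInflight; });
            while (!pending.empty() && inFlight < maxInflight)
            {
                std::string msg = std::move(pending.front());
                pending.pop();
                ++inFlight;
                // ... hand msg to the network layer ...
            }
        }
    }
};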
Faaux commented 1 month ago

I see very similar behaviour. Weirdly enough, I get a "random" ~50 ms delay per PUBREC at QoS 2. No in_flight changes on my end, though.

Protocol: MQTTv5
Lib-Version: 1.3.9
mosquitto: 2.0.11 (Default Config)

Please note the following two lines in the log.

20240823 102351.618 (4286572096)    (3)< WebSocket_getch:633 (0)
20240823 102351.666 (4286572096)    (3)> MQTTPacket_decode:333

These correlate to the source in MQTTPacket_Factory.c:118ff:

/* read the packet data from the socket */
*error = WebSocket_getch(net, &header.byte);
if (*error != TCPSOCKET_COMPLETE)   /* first byte is the header byte */
    goto exit; /* packet not read, *error indicates whether SOCKET_ERROR occurred */

/* now read the remaining length, so we know how much more to read */
if ((*error = MQTTPacket_decode(net, &remaining_length)) != TCPSOCKET_COMPLETE)
    goto exit; /* packet not read, *error indicates whether SOCKET_ERROR occurred */

This delay is consistent in all -> PUBLISH <- PUBREC scenarios in my log!

Log

20240823 102351.618 7 TestingClientID -> PUBLISH msgid: 7 qos: 2 retained: 0 rc 0 payload len(176): {"discriminator":"ab
20240823 102351.618 Freeing 16 bytes in heap at file /home/bob/external/_source/060_paho-mqtt-c/src/MQTTPacket.c line 910, heap use now 2976 bytes

20240823 102351.618 (4294964800)     (4)< MQTTPacket_send_publish:912 (0)
20240823 102351.618 (4294964800)    (3)< MQTTProtocol_startPublishCommon:149 (0)
20240823 102351.618 (4294964800)   (2)< MQTTProtocol_startPublish:184 (0)
20240823 102351.618 Freeing 80 bytes in heap at file /home/bob/external/_source/060_paho-mqtt-c/src/MQTTAsyncUtils.c line 1450, heap use now 2960 bytes

20240823 102351.618 Allocating 32 bytes in heap at file /home/bob/external/_source/060_paho-mqtt-c/src/LinkedList.c line 92 ptr 0x7f74e40017a0

20240823 102351.618 (4294964800)  (1)< MQTTAsync_processCommand:1571 (1)
20240823 102351.618 (4294964800)  (1)> Thread_wait_cond:414
20240823 102351.618 Return code 1 from read select
20240823 102351.618 (4286572096)    (3)> Socket_continueWrites:995
20240823 102351.618 (4286572096)    (3)< Socket_continueWrites:1025 (0)
20240823 102351.618 Return code 1 from write select
20240823 102351.618 (4286572096)    (3)> isReady:226
20240823 102351.618 (4286572096)    (3)< isReady:231 (1)
20240823 102351.618 (4286572096)   (2)< Socket_getReadySocket:328 (7)
20240823 102351.618 m->c->connect_state = 0
20240823 102351.618 (4286572096)   (2)> MQTTPacket_Factory:112
20240823 102351.618 (4286572096)    (3)> WebSocket_getch:596
20240823 102351.618 (4286572096)     (4)> Socket_getch:343
20240823 102351.618 (4286572096)      (5)> SocketBuffer_getQueuedChar:223
20240823 102351.618 (4286572096)      (5)< SocketBuffer_getQueuedChar:242 (-22)
20240823 102351.618 (4286572096)      (5)> SocketBuffer_queueChar:308
20240823 102351.618 queueChar: index is now 1, headerlen 1
20240823 102351.618 (4286572096)      (5)< SocketBuffer_queueChar:333
20240823 102351.618 (4286572096)     (4)< Socket_getch:364 (0)
20240823 102351.618 (4286572096)    (3)< WebSocket_getch:633 (0)
20240823 102351.666 (4286572096)    (3)> MQTTPacket_decode:333
20240823 102351.666 (4286572096)     (4)> WebSocket_getch:596
20240823 102351.666 (4286572096)      (5)> Socket_getch:343
20240823 102351.666 (4286572096)       (6)> SocketBuffer_getQueuedChar:223
20240823 102351.666 (4286572096)       (6)< SocketBuffer_getQueuedChar:242 (-22)
20240823 102351.666 (4286572096)       (6)> SocketBuffer_queueChar:308
20240823 102351.666 queueChar: index is now 2, headerlen 2
20240823 102351.666 (4286572096)       (6)< SocketBuffer_queueChar:333
20240823 102351.666 (4286572096)      (5)< Socket_getch:364 (0)
20240823 102351.666 (4286572096)     (4)< WebSocket_getch:633 (0)
20240823 102351.666 (4286572096)    (3)< MQTTPacket_decode:349 (0)
20240823 102351.666 (4286572096)    (3)> WebSocket_getdata:677
20240823 102351.666 (4286572096)     (4)> Socket_getdata:381
20240823 102351.666 (4286572096)      (5)> SocketBuffer_getQueuedData:180
20240823 102351.666 (4286572096)      (5)< SocketBuffer_getQueuedData:208
20240823 102351.666 (4286572096)      (5)> SocketBuffer_complete:283
20240823 102351.666 (4286572096)      (5)< SocketBuffer_complete:293
20240823 102351.666 (4286572096)     (4)< Socket_getdata:415
20240823 102351.666 (4286572096)    (3)< WebSocket_getdata:756 (-536626872)
20240823 102351.666 (4286572096)    (3)> MQTTPacket_ack:797
20240823 102351.666 Allocating 48 bytes in heap at file /home/bob/external/_source/060_paho-mqtt-c/src/MQTTPacket.c line 798 ptr 0x7f74d8000ec0

20240823 102351.666 (4286572096)    (3)< MQTTPacket_ack:836
20240823 102351.666 (4286572096)   (2)< MQTTPacket_Factory:176 (0)
20240823 102351.666 (4286572096)   (2)> MQTTProperties_copy:433
20240823 102351.666 (4286572096)   (2)< MQTTProperties_copy:442
20240823 102351.666 (4286572096)   (2)> MQTTProtocol_handlePubrecs:463
20240823 102351.666 7 TestingClientID <- PUBREC msgid: 7