emqx / NanoSDK

NanoSDK - MQTT 5.0-compliant SDK with QUIC support in NNG flavor
https://nanomq.io
MIT License

Subscribing to $SYS/brokers/connected and $SYS/brokers/disconnected #229

Closed by pmf 6 months ago

pmf commented 6 months ago

Describe the bug
Using commit e5516c1b606fa62b83b67758b58ec60a78c16e05 of NanoSDK, I cannot get subscriptions to the topics $SYS/brokers/connected and $SYS/brokers/disconnected of a nanomq broker to work with the mqttv5_client demo (or a custom client based upon it). Neither an explicit subscription to these topics nor a subscription to '#' works.

The subscription reason code for these topics indicates success (granted QoS 2), as I interpret it.

When using clients like MQTT Explorer or MQTTX, I successfully receive messages on the $SYS/brokers/connected and $SYS/brokers/disconnected topics when another client connects/disconnects on the same broker instance (whether I subscribe directly to these topics or to '#').

When starting the other client (MQTT Explorer or MQTTX) first and then starting/killing mqttv5_client, I see connection/disconnection events for mqttv5_client, but I never see other clients connecting while mqttv5_client is running and subscribed, so I'm assuming nanomq is correctly publishing to $SYS/brokers/connected and $SYS/brokers/disconnected.

Expected behavior
When subscribing explicitly to $SYS/brokers/connected and $SYS/brokers/disconnected, I receive the respective messages that the broker is successfully publishing to other clients (MQTT Explorer, MQTTX).

Actual Behavior

./mqttv5_client sub mqtt-tcp://127.0.0.1:1883 0 "\$SYS/brokers/connected"
# receiving nothing when connecting/disconnecting other client

./mqttv5_client sub mqtt-tcp://127.0.0.1:1883 0 "#"
# receiving nothing when connecting/disconnecting other client
# receiving stuff published to topic 'foo' from that client

To Reproduce
See 'Actual Behavior' above.

Environment Details

Additional context
n/a

pmf commented 6 months ago

This (embarrassingly enough) turned out to be fixed by increasing my maximum message size (I had been using the default of 120 bytes from the example).