Closed: onrao closed this issue 5 months ago
Clients are using the shared subscription
@onrao could you be more specific? What do you mean by data loss, and how do you determine it? How many shared subscribers are there, and with what shared subscriber policy? Have you tested with providing a ClientID vs. no ClientID?
Hi @ioolkos,
Thank you for the reply. We would like to understand the behaviour of VerneMQ in the following scenario:

2. We have a third-party client application that enables a shared subscription, with a policy, on the topic filter "$shared/json//+/+/attrs".

When two instances of this client application are running, VerneMQ should ideally distribute messages among the connected clients with active sessions, because of the shared subscription. We do see the subscribed topics distributed properly between them, but we are losing some of the published messages (10 to 20%) on the topics covered by this shared subscription.
Please find the Docker configuration used for VerneMQ:
DOCKER_VERNEMQ_ACCEPT_EULA yes
DOCKER_VERNEMQ_ALLOW_ANONYMOUS off
DOCKER_VERNEMQ_listener.tcp.LISTENER.proxy_protocol on
DOCKER_VERNEMQ_listener.tcp.proxy_protocol_use_cn_as_username on
DOCKER_VERNEMQ_LOGCONSOLE off
DOCKER_VERNEMQ_LOGCONSOLE__LEVEL debug
DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES 0
DOCKER_VERNEMQ_MAX_OFFLINE_MESSAGES -1
DOCKER_VERNEMQ_MAX_ONLINE_MESSAGES -1
DOCKER_VERNEMQ_persistent_client_expiration never
DOCKER_VERNEMQ_plugins.vmq_passwd on
DOCKER_VERNEMQ_USER_client1 client1
DOCKER_VERNEMQ_USER_client client
DOCKER_VERNEMQ_OUTGOING_CLUSTERING_BUFFER_SIZE 100000
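For readers unfamiliar with this style of configuration: in the official VerneMQ Docker image, environment variables prefixed with DOCKER_VERNEMQ_ are translated into vernemq.conf entries at container start. A hedged sketch (the image name and exact set of variables here are just examples, not a recommended setup):

```shell
# Each -e DOCKER_VERNEMQ_<key>=<value> becomes a "<key> = <value>"
# line in vernemq.conf inside the container.
docker run -d --name vernemq \
  -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
  -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=off" \
  -e "DOCKER_VERNEMQ_MAX_OFFLINE_MESSAGES=-1" \
  vernemq/vernemq
# e.g. vernemq.conf would then contain:
#   allow_anonymous = off
#   max_offline_messages = -1
```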
Closing this. For future readers: VerneMQ will assign a random ClientID for MQTT clients that connect without giving a ClientID. This is not suitable for shared subscriptions (and persistent sessions), which need stable ClientIDs.
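To illustrate the expected shared-subscription semantics the resolution relies on (each message is delivered to exactly one member of the group, and members are identified by stable ClientIDs), here is a minimal stdlib-only Python sketch. The class and the round-robin policy are a toy model for illustration, not VerneMQ internals; a client that reconnects with a random ClientID would appear to the broker as a brand-new group member, which is why stable ClientIDs matter.

```python
from itertools import cycle

class SharedGroup:
    """Toy model of an MQTT shared-subscription group.

    Each published message is delivered to exactly one member,
    chosen round-robin (one of several policies a broker may use).
    Members are keyed by stable ClientIDs.
    """

    def __init__(self, client_ids):
        # One delivery list per stable ClientID.
        self.delivered = {cid: [] for cid in client_ids}
        self._next = cycle(client_ids)

    def publish(self, message):
        # Deliver to exactly one group member.
        self.delivered[next(self._next)].append(message)

group = SharedGroup(["client1", "client2"])
for i in range(10):
    group.publish(f"msg-{i}")

# With two stable members, every message lands with exactly one of
# them: 10 published, 10 delivered in total, none lost.
total = sum(len(msgs) for msgs in group.delivered.values())
print(total)  # → 10
```

The point of the sketch: delivery is partitioned, not duplicated, across the group, so "losing" 10-20% of messages usually means a group member is absorbing them under an unstable (e.g. randomly assigned) ClientID rather than the broker dropping them.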
👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq 👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.
To help us save time and help you faster
Before opening a new issue, please make sure that there isn't already an open (or closed/resolved) issue reporting the same problem. Please always open a new issue rather than posting to a closed one - but please reference the possibly related old issue.
Please do not insert images of text, but add the text instead.
Environment
VerneMQ Version: 1.10.3
OS: AWS Linux
Erlang/OTP version (if building from source):
VerneMQ configuration (vernemq.conf) or the changes from the default
Cluster size/standalone: standalone container
Expected behaviour
There shouldn't be any data loss.
Actual behaviour
Data loss observed.