Closed · nanomad closed this issue 6 years ago
@nanomad
Periodic republishing of unacked QoS 1 publishes
AFAIK modern TCP stacks guarantee delivery for a given connection. If congestion gets worse, the send buffer will fill up at some point and the keep-alive ping will take over to determine whether the connection is alive, disconnecting if it isn't. The unacked packets will be republished in the next session (for persistent sessions).
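For context, a minimal sketch of the bookkeeping this implies on the client side: track QoS 1 publishes by packet id until their PUBACK arrives, and replay whatever is left when a persistent session reconnects. All names and types below are illustrative only, not rumqtt's actual implementation.

```rust
use std::collections::BTreeMap;

/// An outgoing QoS 1 publish awaiting its PUBACK (illustrative type).
#[derive(Clone, Debug)]
struct PendingPublish {
    pkid: u16,
    topic: String,
    payload: Vec<u8>,
}

#[derive(Default)]
struct OutgoingState {
    // Keyed by packet id so a PUBACK removes exactly one entry.
    unacked: BTreeMap<u16, PendingPublish>,
}

impl OutgoingState {
    /// Remember a publish until the broker acknowledges it.
    fn on_publish_sent(&mut self, publish: PendingPublish) {
        self.unacked.insert(publish.pkid, publish);
    }

    /// Forget a publish once its PUBACK arrives.
    fn on_puback(&mut self, pkid: u16) {
        self.unacked.remove(&pkid);
    }

    /// On reconnect of a persistent session (clean session = false), everything
    /// still unacked is resent (with the DUP flag set on the wire).
    fn republish_on_reconnect(&self) -> Vec<PendingPublish> {
        self.unacked.values().cloned().collect()
    }
}

fn main() {
    let mut state = OutgoingState::default();
    state.on_publish_sent(PendingPublish { pkid: 1, topic: "a/b".into(), payload: b"hello".to_vec() });
    state.on_publish_sent(PendingPublish { pkid: 2, topic: "a/b".into(), payload: b"world".to_vec() });

    // Broker acked only pkid 1 before the connection dropped.
    state.on_puback(1);

    // After reconnecting, only pkid 2 needs to go out again.
    for p in state.republish_on_reconnect() {
        println!("republish pkid={} topic={}", p.pkid, p.topic);
    }
}
```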
Proper handling of QoS 2 publishes
Yeah, this needs to be implemented, but it wouldn't be my top priority :). Even popular services like AWS IoT, Google Cloud IoT Core and Azure IoT Hub don't support QoS 2.
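For reference, what an implementation would have to track on the sender side is the four-step QoS 2 exchange (PUBLISH -> PUBREC -> PUBREL -> PUBCOMP). A rough sketch of that state machine, with hypothetical names that are not rumqtt API:

```rust
use std::collections::HashMap;

/// Sender-side state for one outgoing QoS 2 publish.
#[derive(Debug, PartialEq)]
enum Qos2State {
    /// PUBLISH sent, waiting for PUBREC.
    AwaitingPubrec,
    /// PUBREC received and PUBREL sent, waiting for PUBCOMP.
    AwaitingPubcomp,
}

#[derive(Default)]
struct Qos2Outgoing {
    inflight: HashMap<u16, Qos2State>,
}

impl Qos2Outgoing {
    fn on_publish_sent(&mut self, pkid: u16) {
        self.inflight.insert(pkid, Qos2State::AwaitingPubrec);
    }

    /// Returns true if a PUBREL should be sent in response.
    fn on_pubrec(&mut self, pkid: u16) -> bool {
        match self.inflight.get_mut(&pkid) {
            Some(state) => {
                *state = Qos2State::AwaitingPubcomp;
                true
            }
            None => false, // unknown packet id; ignore
        }
    }

    /// PUBCOMP completes the exchange; the packet id can be reused.
    fn on_pubcomp(&mut self, pkid: u16) {
        self.inflight.remove(&pkid);
    }
}

fn main() {
    let mut out = Qos2Outgoing::default();
    out.on_publish_sent(7);
    assert!(out.on_pubrec(7)); // PUBREC arrived -> send PUBREL
    out.on_pubcomp(7);         // PUBCOMP arrived -> exchange done
    assert!(out.inflight.is_empty());
}
```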
Disk persistence of QoS 1 and 2 publishes
I felt that this adds complications to already-not-very-straightforward tokio. Adding a persistence layer in front of rumqtt worked well for us. Maybe we can reach a consensus on this in the future and decide.
Resub after disconnect
Broker remembers subscriptions for persistent sessions. Do you want to resubscribe for clean sessions?
@tekjar
AFAIK modern TCP stacks guarantee delivery for a given connection. If congestion gets worse, the send buffer will fill up at some point and the keep-alive ping will take over to determine whether the connection is alive, disconnecting if it isn't. The unacked packets will be republished in the next session (for persistent sessions).
Good point; I suppose that with TCP transport this is a non-requirement (or at least a very low-priority one). It would probably only protect against a misbehaving broker that happens to forget about a publish we sent out.
I felt that this adds complications to already-not-very-straightforward tokio. Adding a persistence layer in front of rumqtt worked well for us. Maybe we can reach a consensus on this in the future and decide.
I wouldn't do it in Tokio; I was thinking about persisting the broker state to disk (or part of it) and re-using that if clean_connection = false. I suppose it could be an opt-in feature.
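As a rough illustration of that kind of opt-in persistence: append unacked publishes to a file and reload them on startup when the session is persistent. The file format, paths and names below are invented purely for illustration and are not anything rumqtt does.

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};
use std::path::Path;

struct StoredPublish {
    pkid: u16,
    topic: String,
    payload: Vec<u8>,
}

/// Append one pending publish as a single line: "<pkid>\t<topic>\t<hex payload>".
fn persist(path: &Path, p: &StoredPublish) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    let hex: String = p.payload.iter().map(|b| format!("{:02x}", b)).collect();
    writeln!(file, "{}\t{}\t{}", p.pkid, p.topic, hex)
}

/// Reload everything that was never acknowledged before the last shutdown.
fn reload(path: &Path) -> std::io::Result<Vec<StoredPublish>> {
    let mut out = Vec::new();
    for line in BufReader::new(File::open(path)?).lines() {
        let line = line?;
        let mut parts = line.splitn(3, '\t');
        let (pkid, topic, hex) = match (parts.next(), parts.next(), parts.next()) {
            (Some(a), Some(b), Some(c)) => (a, b, c),
            _ => continue, // skip malformed lines
        };
        let payload = hex
            .as_bytes()
            .chunks(2)
            .filter_map(|c| u8::from_str_radix(std::str::from_utf8(c).ok()?, 16).ok())
            .collect();
        out.push(StoredPublish {
            pkid: pkid.parse().unwrap_or(0),
            topic: topic.to_string(),
            payload,
        });
    }
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let path = Path::new("pending_publishes.log");
    persist(path, &StoredPublish { pkid: 3, topic: "sensors/temp".into(), payload: b"21.5".to_vec() })?;
    for p in reload(path)? {
        println!("would republish pkid={} to {}", p.pkid, p.topic);
    }
    Ok(())
}
```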
Broker remembers subscriptions for persistent sessions. Do you want to resubscribe for clean sessions?
This is more like the first point: it would protect against broker crashes, since AFAIK a broker is allowed to drop (or may simply lose) the client state after a crash.
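A client-side way to cover that is to keep our own record of subscriptions and replay them after every successful CONNACK, along these lines (hypothetical names, not rumqtt's API):

```rust
use std::collections::HashMap;

#[derive(Default)]
struct SubscriptionStore {
    // topic filter -> requested QoS
    filters: HashMap<String, u8>,
}

impl SubscriptionStore {
    fn on_subscribe(&mut self, filter: &str, qos: u8) {
        self.filters.insert(filter.to_string(), qos);
    }

    fn on_unsubscribe(&mut self, filter: &str) {
        self.filters.remove(filter);
    }

    /// Called after every successful CONNACK; with a clean session (or a broker
    /// that lost its state) these SUBSCRIBE packets need to be sent again.
    fn resubscribe_all(&self) -> Vec<(String, u8)> {
        self.filters.iter().map(|(f, q)| (f.clone(), *q)).collect()
    }
}

fn main() {
    let mut subs = SubscriptionStore::default();
    subs.on_subscribe("devices/+/telemetry", 1);
    subs.on_subscribe("config/updates", 1);
    subs.on_unsubscribe("config/updates");

    // After a reconnect with clean session, replay what is left.
    for (filter, qos) in subs.resubscribe_all() {
        println!("SUBSCRIBE {} (qos {})", filter, qos);
    }
}
```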
Disk persistence and periodic republishes are kind of non-goals for this crate at the moment. For Ather's use case, we created a persistence layer in front of rumqtt. I'm closing this for now. Please open individual issues for the remaining items.
@tekjar I'd like to keep track of what is missing in the tokio2 branch in order to declare it a "functioning" MQTT client.
Did I miss anything else?