eclipse / paho.golang


Manual Ack using autopaho #185

Open abemedia opened 10 months ago

abemedia commented 10 months ago

Right now it isn't possible to manually ack messages using autopaho.

My application processes messages and only acks them after processing has succeeded. This works fine with normal paho; however, when trying to migrate to autopaho I realised there is no way to access the current paho client.

Are there any plans to make ack (or access to current client) available in autopaho?
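
For reference, the pattern described above looks roughly like this with the plain paho client. This is only a sketch: it assumes a paho.golang release that exposes ClientConfig.EnableManualAcknowledgment and Client.Ack (names may differ between versions), and the topic "jobs/#", conn and process() are placeholders, not part of the library.

// Sketch: ack only after successful processing, using the plain paho client.
// Imports assumed: net, github.com/eclipse/paho.golang/paho
func newManualAckClient(conn net.Conn, process func([]byte) error) *paho.Client {
    router := paho.NewStandardRouter()
    c := paho.NewClient(paho.ClientConfig{
        Conn:                       conn, // an already-established connection to the broker
        Router:                     router,
        EnableManualAcknowledgment: true, // the library will not auto-ack after the handler returns
    })
    router.RegisterHandler("jobs/#", func(p *paho.Publish) {
        if err := process(p.Payload); err != nil {
            return // not acked; the broker may redeliver depending on QOS
        }
        _ = c.Ack(p) // ack only once processing has succeeded
    })
    return c
}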

MattBrittan commented 10 months ago

See issue #160 for a discussion of some potential issues around manual ACKs. This is not something I use myself, so I'm interested in thoughts from those who do.

MattBrittan commented 7 months ago

Adding a note here because I've hit a somewhat related issue.

In the v3 client, when order matters is set to false, the process on message receipt is effectively:

go func() {
    handler(msg)
    if !manualACK {
        msg.Ack()
    }
}()

Currently it's not possible to replicate this with the v5 client. The handlers are called and then the message is acknowledged, which means that long-running handlers cause parts of the library to stall (so if a handler takes 31s and the keepalive is 30s, the connection will be dropped).

However I believe that recent changes also provide a solution to this issue. Routers are now optional, having been replaced with OnPublishReceived []func(PublishReceived) (bool, error) where PublishReceived is:

PublishReceived struct {
        Packet *Publish
        Client *Client // The Client that received the message (note that the connection may have been lost post-receipt)

        AlreadyHandled bool    // Set to true if a previous callback has returned true (indicating some action has already been taken re the message)
        Errs           []error // Errors returned by previous handlers (if any).
}

This means that the handler has access to the Client that received the message and can use that to call Ack. The Ack will fail if the connection has dropped or been re-established, meaning there is a chance of double-ups at QOS 2. To avoid that, I think we would need to provide a way to delay reconnection until all handlers have completed processing; this would require some thought, and as it is really only an issue at QOS 2 it will probably not impact too many users.
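
To illustrate, here is a rough sketch of how that could look with autopaho once OnPublishReceived is used. It assumes EnableManualAcknowledgment is honoured when set via the paho.ClientConfig embedded in autopaho's config, and that the server URL field is ServerUrls (it has been named BrokerUrls in some releases); the client ID, newManualAckConnection and process() are placeholders.

// Sketch: manual ack from an OnPublishReceived callback under autopaho.
// Imports assumed: context, net/url,
// github.com/eclipse/paho.golang/autopaho, github.com/eclipse/paho.golang/paho
func newManualAckConnection(ctx context.Context, serverURL *url.URL,
    process func([]byte) error) (*autopaho.ConnectionManager, error) {
    cfg := autopaho.ClientConfig{
        ServerUrls: []*url.URL{serverURL}, // named BrokerUrls in older releases
        KeepAlive:  30,
        ClientConfig: paho.ClientConfig{
            ClientID:                   "manual-ack-example",
            EnableManualAcknowledgment: true, // library will not auto-ack
            OnPublishReceived: []func(paho.PublishReceived) (bool, error){
                func(pr paho.PublishReceived) (bool, error) {
                    if err := process(pr.Packet.Payload); err != nil {
                        return false, err // not acked; broker may redeliver
                    }
                    // Ack via the Client that received the message; this will
                    // fail if the connection has dropped since receipt.
                    return true, pr.Client.Ack(pr.Packet)
                },
            },
        },
    }
    // Subscribing (e.g. in OnConnectionUp) is omitted for brevity.
    return autopaho.NewConnection(ctx, cfg)
}

Until something like the reconnect-delay idea above exists, processing would still need to be idempotent if QOS 2 double-ups matter.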