ziggie1984 opened 1 month ago
I think the question is, should we ever abort the payment lifecycle even when there are inflight HTLCs? The answer is yes, but only if we get a non-MPP-related error, such as a DB-related error, or other lower-level critical errors.
The previous assumption is that we should only rely on this func to decide if we want to abort the loop:
https://github.com/lightningnetwork/lnd/blob/ab96de3f074e008a019e731701c9cca934fa1b11/routing/payment_lifecycle.go#L233-L237
which works most of the time given the setup that `RequestRoute` is fast (in milliseconds), but it can break if:

1. one or more HTLCs are in flight.
2. the sender node shuts down before the invoice times out and restarts after the invoice times out.
3. we call `decideNextStep` before the payment is marked as failed, and proceed to register another attempt.
4. we call `RegisterAttempt`, which now fails as we cannot register attempts on a failed payment.

The root cause is that we have two different views of the payment in two goroutines:

- in `resumePayment`, we always fetch the latest payment state for each iteration and make decisions based on this state.
- in `collectResultAsync`, we alter the payment state based on the result collected from `htlcswitch`.

The simple fix is to always read and write to the DB in the same goroutine, which should happen in `resumePayment`, since the only slow action is collecting results from `htlcswitch` via:

https://github.com/lightningnetwork/lnd/blob/ab96de3f074e008a019e731701c9cca934fa1b11/routing/payment_lifecycle.go#L509-L511
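The interleaving can be sketched as a minimal Go program. This is an illustration only: `paymentDB`, `markFailed`, `registerAttempt`, and `isFailed` are hypothetical stand-ins for lnd's payment store, not its actual API.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// paymentDB is a simplified stand-in for the on-disk payment state
// that both goroutines read and write.
type paymentDB struct {
	mu     sync.Mutex
	failed bool
}

// markFailed is what the collector goroutine effectively does when
// the htlcswitch result is a terminal failure (e.g. MPP timeout).
func (db *paymentDB) markFailed() {
	db.mu.Lock()
	defer db.mu.Unlock()
	db.failed = true
}

// registerAttempt mimics RegisterAttempt: it refuses to add an
// attempt to a payment that is already failed.
func (db *paymentDB) registerAttempt() error {
	db.mu.Lock()
	defer db.mu.Unlock()
	if db.failed {
		return errors.New("cannot register attempt on failed payment")
	}
	return nil
}

// isFailed is the check the next-step decision is based on.
func (db *paymentDB) isFailed() bool {
	db.mu.Lock()
	defer db.mu.Unlock()
	return db.failed
}

func main() {
	db := &paymentDB{}

	// The lifecycle goroutine reads the state: payment looks healthy...
	fmt.Println("decideNextStep sees failed =", db.isFailed())

	// ...but before it registers the next attempt, the collector
	// goroutine fails the payment based on the htlcswitch result.
	db.markFailed()

	// The stale decision now hits the hard error from this issue.
	if err := db.registerAttempt(); err != nil {
		fmt.Println("RegisterAttempt:", err)
	}
}
```

Even with a mutex around each individual access, the check-then-act sequence spans two critical sections, so the decision can be stale by the time it is acted on.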
A better fix is to refactor deeper to make the DB actions clearer, which I think can be accomplished in the upcoming native payment SQL PRs.
I think it makes sense to store the HTLC attempts in memory during the execution of the payment so that the payment lifecycle can always be aware of how many HTLCs have been sent?
The more we rely on ephemeral state, the less fault tolerant it becomes. Suppose a power outage happens: all the results will be lost, unless we can reconstruct them, say, if `htlcswitch` has stored them on disk.
I think the question is, should we ever abort the payment lifecycle even when there are inflight HTLCs? The answer is yes, but only if we get a non-MPP-related error, such as a DB-related error, or other lower-level critical errors.
Hmm, I would rather make sure we resolve all INFLIGHT HTLCs even if we hit a lower-level critical error, so that we mark this payment and all its HTLCs as somehow unrecoverable, and at least the `trackpayment` cmd signals the exit of the payment correctly.
but it can break if:

1. one or more HTLCs are in flight.
2. the sender node shuts down before the invoice times out and restarts after the invoice times out.
3. we call `decideNextStep` before the payment is marked as failed, and proceed to register another attempt.
4. we call `RegisterAttempt`, which now fails as we cannot register attempts on a failed payment.
Do you mean those steps all must happen in sequential order for the payment lifecycle to break, or does every point represent a case where the payment lifecycle fails? I guess it's the former, but I don't think point 2 is necessary for it to break?
The simple fix is to always read and write to the DB in the same goroutine, which should happen in `resumePayment`. Since the only slow action is to collect results from `htlcswitch` via:
Depending on when the payment SQL refactor is planned, I would prefer going with a simple fix first and then proceeding. Your recommendation of writing to the DB in `resumePayment` is definitely a good way to solve it, and it seems not super complicated.
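A minimal sketch of that single-writer idea, assuming hypothetical `attemptResult` and `payment` shapes (not lnd's actual types): the collector only forwards raw htlcswitch results over a channel, and `resumePayment` is the sole goroutine that touches the payment state.

```go
package main

import "fmt"

// attemptResult is the raw result the collector forwards; it carries
// no state mutation of its own.
type attemptResult struct {
	attemptID uint64
	failed    bool
}

// payment is owned by a single goroutine, so no mutex is needed and
// no decision can be based on a state mutated behind its back.
type payment struct {
	status string
}

// collectResultAsync no longer mutates the payment; it just forwards
// the result to resumePayment over a channel.
func collectResultAsync(results chan<- attemptResult, r attemptResult) {
	go func() { results <- r }()
}

// resumePayment owns all state transitions: it alone reads the
// results and writes the resulting state.
func resumePayment(p *payment, results <-chan attemptResult, n int) {
	for i := 0; i < n; i++ {
		r := <-results
		if r.failed {
			p.status = "failed"
		}
	}
}

func main() {
	p := &payment{status: "inflight"}
	results := make(chan attemptResult)
	collectResultAsync(results, attemptResult{attemptID: 1, failed: true})
	resumePayment(p, results, 1)
	fmt.Println("final status:", p.status)
}
```

Because every read and write of `payment` happens in `resumePayment`, the check-then-act sequence (decide, then register) can no longer be invalidated between its two steps.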
While looking through the code I am wondering why we mutate the state of the payment in a `fetchPayment` method. I think we should not change the state in a fetch method, or we should rename this `fetch` method to `update`. wdyt?
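To illustrate the naming concern, here is a hypothetical sketch (not lnd's code) where the read and the mutation are split into honestly named functions:

```go
package main

import "fmt"

// paymentState is an illustrative stand-in for the payment record.
type paymentState struct {
	numAttempts int
}

// fetchPayment is a pure read: calling it any number of times
// changes nothing, which is what a caller of a "fetch" expects.
func fetchPayment(s paymentState) paymentState {
	return s
}

// updatePayment holds the mutation, under a name that says so.
func updatePayment(s *paymentState) {
	s.numAttempts++
}

func main() {
	s := paymentState{}
	_ = fetchPayment(s) // fetching twice...
	_ = fetchPayment(s) // ...changes nothing
	updatePayment(&s)
	fmt.Println("attempts:", s.numAttempts)
}
```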
This keeps happening for us on a regular basis on mainnet. There is no way to prevent it apart from not using MPP and even that is just an educated guess.
Remarked in https://github.com/lightningnetwork/lnd/pull/8174
Analysed some of the behavior here:
https://github.com/lightningnetwork/lnd/pull/8174#issuecomment-2264125375
What I think might be worth changing (from my understanding of the codebase):
When sending a payment, LND will by default attempt an MPP payment with up to 16 shards per payment. We launch all of them asynchronously; however, when during the process one of the HTLC attempts fails for specific reasons, e.g. requesting a route fails or we cannot register a payment attempt:

https://github.com/lightningnetwork/lnd/blob/7e1e0545113d7361ed1651e01003c9d30dcc42c9/routing/payment_lifecycle.go#L258-L280

we abort the payment lifecycle and don't wait for the INFLIGHT HTLC attempts to be resolved; we will only reconsider them during a restart.
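One possible shape of "drain before abort", sketched with a `sync.WaitGroup`; `drainInflight` and everything around it are hypothetical names, not lnd's actual lifecycle code:

```go
package main

import (
	"fmt"
	"sync"
)

// drainInflight launches n attempt collectors and, even though the
// lifecycle has already decided to abort, blocks until every inflight
// attempt has reported a result before letting the exit path run.
func drainInflight(n int) []string {
	var wg sync.WaitGroup
	results := make(chan string, n)
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// In real code this would wait on the htlcswitch
			// result for the attempt; here it resolves instantly.
			results <- fmt.Sprintf("attempt %d resolved", id)
		}(i)
	}
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	// Terminal failure occurs (e.g. MPP timeout at the receiver)...
	fmt.Println("terminal error: aborting, but draining inflight HTLCs first")
	// ...yet every launched shard is still resolved before exit.
	for _, r := range drainInflight(3) {
		fmt.Println(r)
	}
}
```

The point of the sketch is only the ordering: the terminal error is recorded first, but the lifecycle exits only after all launched attempts have settled or failed, so nothing is left INFLIGHT until a restart.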
One of the examples described in https://github.com/lightningnetwork/lnd/pull/8174#issuecomment-2264125375 is that as soon as the MPP times out at the receiver side while we are in the process of sending another HTLC, we will fail the payment lifecycle without resolving all the INFLIGHT HTLCs.
This has downsides:

On the one hand, the payment will stay inflight, and service providers using the API might not be aware that this payment has no chance of resolving successfully because, for example, the MPPTimeout has already been reached.
Moreover, when using the `TrackPayment` API, the payment stays INFLIGHT/PENDING forever.

Possible solutions, interested in feedback here:
I think it makes sense to store the HTLC attempts in memory during the execution of the payment so that the payment lifecycle can always be aware of how many HTLCs have been sent? I store them in the DB, but I think it might be worthwhile to also have them easily accessible in the lifecycle struct maybe? (Currently we only have 1 golang channel which listens to potential results of all the result collectors.)
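A rough sketch of that idea, assuming a hypothetical `lifecycle` struct (not lnd's actual `paymentLifecycle`): a mutex-guarded in-memory map of launched attempts, kept alongside the DB copy, that the exit path could consult before aborting.

```go
package main

import (
	"fmt"
	"sync"
)

// lifecycle tracks launched HTLC attempts in memory so the payment
// lifecycle always knows how many attempts are still outstanding.
// The DB remains the source of truth; this map is just a fast view.
type lifecycle struct {
	mu       sync.Mutex
	inflight map[uint64]struct{}
}

func newLifecycle() *lifecycle {
	return &lifecycle{inflight: make(map[uint64]struct{})}
}

// launch records an attempt when it is handed to the switch.
func (l *lifecycle) launch(attemptID uint64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.inflight[attemptID] = struct{}{}
}

// resolve removes an attempt once its result has been collected.
func (l *lifecycle) resolve(attemptID uint64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.inflight, attemptID)
}

// numInflight is what the exit path would consult before aborting:
// a nonzero count means there are still HTLCs to wait for.
func (l *lifecycle) numInflight() int {
	l.mu.Lock()
	defer l.mu.Unlock()
	return len(l.inflight)
}

func main() {
	l := newLifecycle()
	l.launch(1)
	l.launch(2)
	l.resolve(1)
	fmt.Println("inflight attempts:", l.numInflight())
}
```

As noted above in the thread, this in-memory view is ephemeral: after a power outage it would have to be rebuilt from whatever the DB (or `htlcswitch`) has persisted, so it complements rather than replaces the on-disk state.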