interledger / rfcs

Specifications for Interledger and related protocols
https://interledger.org

Support bulk-fulfill? #298

Closed michielbdejong closed 6 years ago

michielbdejong commented 6 years ago

Something I thought of when we were talking about streaming payments yesterday: if Alice wants to pay Bob, and they have real-time two-way out-of-band communication open, Bob can "loop back" the conditions of all incoming "payment chunks" to Alice. She can check whether all of them arrived, and then atomically share all the fulfillments with Bob. This would be a transport protocol, like PSK.
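To illustrate the flow, here is a rough sketch from Alice's side; the interfaces and names are invented for this comment, not part of any existing ILP transport:

```ts
// Hypothetical sketch of the chunk-loop idea: Bob echoes back the condition of
// every chunk he sees prepared, Alice checks that the set is complete, and only
// then releases all fulfillments at once.

import { createHash, randomBytes } from 'crypto'

interface Chunk {
  id: number
  fulfillment: Buffer   // 32-byte preimage, known only to Alice
  condition: Buffer     // SHA-256(fulfillment), attached to the prepared transfer
}

function makeChunks(count: number): Chunk[] {
  return Array.from({ length: count }, (_, id) => {
    const fulfillment = randomBytes(32)
    const condition = createHash('sha256').update(fulfillment).digest()
    return { id, fulfillment, condition }
  })
}

// Bob loops back the conditions he has seen arrive; Alice compares them with what
// she sent and, only if every chunk is accounted for, hands over all fulfillments
// in one message.
function releaseIfComplete(sent: Chunk[], loopedBack: Buffer[]): Buffer[] | null {
  const seen = new Set(loopedBack.map((c) => c.toString('hex')))
  const allArrived = sent.every((chunk) => seen.has(chunk.condition.toString('hex')))
  return allArrived ? sent.map((chunk) => chunk.fulfillment) : null
}
```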

Situation in which Chunk-Loop Fulfillment (CLF) payments are useful:

Rough description of the CLF procedure:

dappelt commented 6 years ago

I think this is an interesting idea addressing a relevant problem.

> She wants one of two things to happen within a hard timeout limit: total-success, or total-failure

The described protocol might make partial payments less likely than with PSK, but it does not guarantee total-success or total-failure. What it does do is let the sender know that each chunk of the payment made it to the destination ledger and was at some point on hold. There is no guarantee that, once the sender hands over the fulfillments to the recipient, the payments are still on hold. Furthermore, even if the sender gives out all fulfillments atomically, there is no guarantee that the recipient passes all of them on to the destination ledger. For example, the recipient might crash halfway through submitting the fulfillments to the ledger, or there might be a network outage, etc.

In a micropayment scenario, the sender has limited payment bandwidth and, hence, the individual micropayments don't arrive at the same time. Therefore, another challenge for your proposed protocol is that you need to align the timeouts of the individual payment chunks, such that the earliest-arriving chunk is still on hold when the last chunk arrives (see the sketch below).
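To make the constraint concrete, here is a back-of-the-envelope sketch with made-up numbers and a hypothetical helper, showing how long the first chunk's hold would need to last given the sender's payment bandwidth:

```ts
// Rough sketch of the timing constraint: every chunk must stay on hold until the
// last chunk has been prepared, so the first chunk's timeout has to cover the
// whole sending window. Numbers and function name are illustrative only.

function minHoldTimeMs(
  totalAmount: number,      // total payment size
  chunkAmount: number,      // size of each chunk
  chunksPerSecond: number   // how fast the sender can prepare chunks
): number {
  const chunkCount = Math.ceil(totalAmount / chunkAmount)
  const sendingWindowMs = (chunkCount / chunksPerSecond) * 1000
  // The earliest-prepared chunk must still be on hold when the final chunk
  // arrives, plus whatever margin the final fulfillment step needs.
  return sendingWindowMs
}

// e.g. a 10,000-unit payment in 100-unit chunks at 5 chunks per second needs the
// first chunk to stay on hold for at least 20 seconds.
console.log(minHoldTimeMs(10000, 100, 5)) // 20000
```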

Nevertheless, I think the idea is, all in all, very interesting and we should investigate some more. I think we still need some kind of "rollback/retry protocol" in case of partial fulfillment. With your proposed protocol, partial fulfillments might happen less often, so it is a good start.

michielbdejong commented 6 years ago

Right, one thing that is missing is a timeout for Alice sending the secret. Another annoying thing is that Bob now assumes a fulfillment risk.

One way to solve this in practice would be if the destination ledger would support 'bulk-fulfill'.

In that case, it's better if Bob generates the secret, and tells Alice which conditions to use without telling her the secret.

Once all chunks are prepared on the destination ledger, Bob does a bulk-fulfill, revealing his secret for the first time, and this atomically updates Bob's account balance. If at least one of Alice's source payments is fulfilled, she can show Bob the fulfillment, and claim she has paid the full amount. It's possible that Alice pays less than intended, but that risk stays with the connectors.
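A rough sketch of that reversed-roles variant, assuming a PSK-style derivation (one HMAC per chunk, condition = SHA-256 of the fulfillment); the exact scheme is an assumption for illustration, not a proposal for the wire format:

```ts
import { createHash, createHmac, randomBytes } from 'crypto'

// Bob (the receiver) holds the secret and derives one fulfillment per chunk.
const secret = randomBytes(32)

function fulfillmentForChunk(chunkId: number): Buffer {
  return createHmac('sha256', secret).update(`chunk:${chunkId}`).digest()
}

// Alice is told only the conditions, so she can prepare the chunks without being
// able to fulfill them herself.
function conditionForChunk(chunkId: number): Buffer {
  return createHash('sha256').update(fulfillmentForChunk(chunkId)).digest()
}

// Once every chunk is prepared on the destination ledger, Bob submits all the
// fulfillments in a single bulk-fulfill, crediting his balance atomically and
// revealing the preimages for the first time.
```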

emschwartz commented 6 years ago

It's an interesting idea but I don't think it actually solves a problem. The two cases this sounds like it could relate to are:

  1. If no single path through the center of the network has enough liquidity to support the payment Alice wants to make but multiple paths originating from her do
  2. Alice doesn't have enough bandwidth with her connector to send the full payment she wants to send.

I think 1 is very unlikely because you would always expect the center of the network to have more liquidity than the edges. And this doesn't solve 2 because, even if you split the payment up into multiple chunks, what the connector cares about is the total amount of money in flight. If you don't have enough bandwidth for one large payment in flight, they won't care if you split it up into multiple chunks (see the sketch below).
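To spell out why splitting doesn't help with 2, a hypothetical connector-side check might look like this (the types and limit are invented for illustration):

```ts
interface PreparedTransfer { source: string; amount: bigint }

// What the connector limits is the total amount on hold per counterparty, so
// ten 100-unit chunks consume exactly as much bandwidth as one 1,000-unit payment.
function withinBandwidth(
  inFlight: PreparedTransfer[],
  next: PreparedTransfer,
  maxInFlight: bigint
): boolean {
  const total = inFlight
    .filter((t) => t.source === next.source)
    .reduce((sum, t) => sum + t.amount, 0n)
  return total + next.amount <= maxInFlight
}
```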

I think the choice between streaming payments and non-streaming payments is a stark one: either you do streaming, in which case each payment is fulfilled separately and you accept the risk of partial payments, or you send everything in one payment and hope there's enough bandwidth to support it.

michielbdejong commented 6 years ago

> If you don't have enough bandwidth for one large payment in flight, they won't care if you split it up into multiple chunks.

Good point, that's possible, but see below.

> I think 1 is very unlikely

I thought we discussed that there will always be payments whose amount is too big (even if it's just the ones above 100 million dollars)?

You're right that in a "center and periphery" system, the periphery is likely to have lower maximum amounts than the center. But what if Alice has more than one connector, or Alice's connector has more than one option for the second hop?

michielbdejong commented 6 years ago

Updated the title, to represent current idea:

By doing an atomic bulk-fulfill, Bob tells the destination ledger to either fulfill all transfers, or none. This will not help with the situation where a single path has insufficient liquidity, but it will be useful if there are multiple paths, each of which is too narrow on its own, but which together have enough liquidity for a large payment (see the sketch below).
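A rough illustration of that multi-path split (path discovery, rates, and fees are out of scope; the types are invented):

```ts
interface Path { name: string; maxAmount: bigint }

// No single path can carry the full amount, but several narrow paths together can.
// The payment is planned across paths up to each path's capacity; if the combined
// capacity still falls short, the whole payment fails up front.
function splitAcrossPaths(total: bigint, paths: Path[]): Map<string, bigint> | null {
  const plan = new Map<string, bigint>()
  let remaining = total
  for (const path of paths) {
    if (remaining === 0n) break
    const take = remaining < path.maxAmount ? remaining : path.maxAmount
    plan.set(path.name, take)
    remaining -= take
  }
  return remaining === 0n ? plan : null
}
```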

sharafian commented 6 years ago

> atomic bulk-fulfill

This sounds a lot like the idea for dependent transfers that @mDuo13 and @sentientwaffle explored. I think the final consensus was that it's really hard to specify or implement because of how many edge cases can arise.

michielbdejong commented 6 years ago

Link?

michielbdejong commented 6 years ago

I would simply implement it as:

```
Fulfill ::= SEQUENCE {
  entries SEQUENCE OF SEQUENCE {
    transferId UInt128,
    fulfillment UInt256
  },
  --
  protocolData ProtocolData
}
```

It would result in a Response if all transferIds can be fulfilled, and in an Error if one or more of them cannot. The ledger should be able to decide this just by looking at the entries, right?
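A minimal sketch of that all-or-nothing check on the ledger side, assuming the ledger keeps prepared (on-hold) transfers in memory; the types and return values are placeholders, not a real ledger API:

```ts
import { createHash } from 'crypto'

interface HeldTransfer { transferId: string; condition: Buffer; amount: bigint }

function bulkFulfill(
  held: Map<string, HeldTransfer>,
  entries: { transferId: string; fulfillment: Buffer }[]
): 'Response' | 'Error' {
  // First pass: validate every entry without touching any state.
  for (const { transferId, fulfillment } of entries) {
    const transfer = held.get(transferId)
    if (!transfer) return 'Error'
    const condition = createHash('sha256').update(fulfillment).digest()
    if (!condition.equals(transfer.condition)) return 'Error'
  }
  // Second pass: only now execute all of them, so the set is applied atomically.
  for (const { transferId } of entries) {
    held.delete(transferId)   // stand-in for crediting the receiver's balance
  }
  return 'Response'
}
```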

The current 'normal' fulfill would then just be a 'bulk' fulfill with exactly one entry.

Not saying we should implement this, just that it would be an option if we see payments hitting liquidity limits inside the network. If that never happens in practice, then we don't need this and we can keep it simple.

mDuo13 commented 6 years ago

What we discussed technically wasn't bulk-fulfill, but rather "dependent transfers": transfers in a ledger whose execution depends solely on the successful outcome of other transfers. Some of the edge cases that puzzled us included loop detection, whether preparing a dependent transfer had to track the composite state of all the transfers it depended on, how to ensure multiple dependent transfers execute in a deterministic order, etc.

This is a little different, and may be more tractable. It certainly simplifies the problem space if the credited account is the same for all the transfers in the bulk object. The XRP Ledger already does something similar on trust lines, for example, if you specify the destination as the issuer—splitting up the balance across multiple paths through different counterparties to get the best rate with what liquidity is available. Rounding and monitoring can be a little tricky or unintuitive but for the most part it seems to work fine.

On the other hand, I worry that this system could be "fragile" or hard to implement correctly. If it were implemented incorrectly, so that the bulk set of transfers was not atomic, it could easily break the guarantee that the sender is only debited if the recipient is credited in full, because executing any one chunk would reveal Bob's secret/fulfillment.

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. If this issue is important, please feel free to bring it up on the next Interledger Community Group Call or in the Gitter chat.