Closed by michielbdejong 6 years ago
I think you're talking about the trustline RPC protocol, but it may be worth doing this for the Ledger Plugin Interface as well.
Originally I wasn't a fan of this idea but it does make a difference that we've moved to thinking that transfers should all execute very fast.
I'm still not sold on this idea. I envision the connector as two separate components: the part that passes transfers forward and the part that passes fulfillments backward. The JavaScript connector works like that; it simply processes notifications rather than treating forwarding and fulfilling a transfer as one operation. By tying `send_transfer` and `fulfill_condition` together, we're making assumptions about the connector architecture that aren't entirely true for the current implementation.
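To illustrate the point about two decoupled components, here is a minimal sketch of a notification-driven connector, in the style described above. All names and shapes are invented for illustration; this is not the actual JavaScript connector's API.

```typescript
// Hypothetical sketch: forwarding and fulfilling are independent notification
// handlers, not two halves of a single request/response operation.

type Transfer = { id: string; amount: number; nextHop: string };

class Connector {
  // transfers we forwarded, so a later fulfillment can be routed back
  private pending = new Map<string, Transfer>();
  public forwarded: Transfer[] = [];
  public fulfilledBack: { id: string; fulfillment: string }[] = [];

  // handler 1: a 'transfer' notification arrives, so forward the transfer
  onTransferNotification(transfer: Transfer): void {
    this.pending.set(transfer.id, transfer);
    this.forwarded.push(transfer); // stand-in for sending to the next hop
  }

  // handler 2: a 'fulfillment' notification arrives, so pass it backwards
  onFulfillNotification(id: string, fulfillment: string): void {
    if (this.pending.has(id)) {
      this.pending.delete(id);
      this.fulfilledBack.push({ id, fulfillment }); // stand-in for fulfill_condition
    }
  }
}
```

Nothing forces the two handlers to run within one connection's lifetime, which is what makes tying them together an extra assumption.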
I'm also concerned about tying the state of the communication to the state of the transfer. Does a dropped HTTP connection mean a rejection, or a cancellation? If it's a rejection, why does it not include a message, and if it's a cancellation what if it happens before the timeout? Can you pick a dropped connection back up to fulfill a transfer? What if the sending party closes their HTTP connection? What happens if the connection is re-established backwards?
I thought about your remarks, and I think that in case of network trouble, payments should just fail. That's the simplest option. You may worry about the HTTP connection being closed before the fulfill in my proposal, but I think that's no bigger a worry than failing to establish the backward connection for the fulfill in the current situation.
One change we would need, which we didn't mention yet, is a separate `settle` RPC call, because the fulfillment response is currently used for sending new payment channel claims / proofs of on-ledger transfers / other types of balance settlement.
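A sketch of what such a decoupled `settle` call might carry, with payload variants mirroring the settlement kinds listed above. All names and field shapes here are illustrative assumptions, not from any spec.

```typescript
// Hypothetical shape of a separate 'settle' RPC call, decoupled from the
// fulfillment response.

type SettlementProof =
  | { kind: 'paychan_claim'; claim: string; amount: number }      // payment channel claim
  | { kind: 'onledger_transfer'; txHash: string; amount: number } // proof of on-ledger transfer
  | { kind: 'other'; details: string; amount: number };           // other balance settlement

interface SettleRequest {
  method: 'settle';
  proof: SettlementProof;
}

function makeSettle(proof: SettlementProof): SettleRequest {
  return { method: 'settle', proof };
}
```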
Another advantage of my proposal is that the sender doesn't need to run an RPC endpoint. In the current situation, sending without an RPC endpoint would rely on a second round trip, just after the expiry, to check how the payment finalized. That's two different flows, so it's more complex than my proposal.
It occurred to me that with this proposal we could make one tiny additional change, and then only connectors and receivers, not senders, would need to run an RPC endpoint: reverse `get_limit` into `set_limit`, so that it goes from sender to receiver. The RPC calls would then be:
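The original list of calls did not survive in this copy of the thread. As a hedged reconstruction, based only on the calls named in this discussion, the surface might look like the following; the signatures are invented.

```typescript
// Illustrative guess at the resulting RPC surface. With set_limit reversed,
// every call flows toward the receiver, so a sender never needs to expose
// an endpoint of its own.

interface TrustlineRpc {
  send_request(req: object): Promise<object>;       // quoting, etc.
  send_transfer(transfer: object): Promise<string>; // resolves with the fulfillment
  set_limit(limit: number): Promise<void>;          // reversed get_limit
  settle(proof: object): Promise<void>;             // separate settlement call
}

// every call a receiver-side endpoint would need to handle
const receiverSideCalls: (keyof TrustlineRpc)[] =
  ['send_request', 'send_transfer', 'set_limit', 'settle'];
```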
Done (at least this idea made it into both the ilp2 experiment and the lpi2 experiment).
Now that we switched to request/response for `send_request`, we might as well do the same for `send_transfer`. It will make it easier to implement a sender (the alternative is either to run an RPC endpoint as a sender, or to add a pro-active `get_fulfillment` method). Previously, we thought transfers would be slow on the scale of HTTP calls, but the newer thinking is that transfers are first executed over trustlines and then (batch-)settled over ledgers, so (especially in the case of remote quoting) there is no longer a reason to assume that the transfer call would take any longer than the quote request call.
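From the sender's side, request/response `send_transfer` could look like the sketch below: the response body carries the fulfillment, so there is no inbound endpoint and no second round trip. The endpoint path, field names, and `post` helper are assumptions for illustration.

```typescript
// Sketch: send_transfer as plain request/response, one round trip,
// just like a quote request.

interface OutgoingTransfer { id: string; amount: number; condition: string }

async function sendTransfer(
  post: (path: string, body: OutgoingTransfer) => Promise<{ fulfillment: string }>,
  transfer: OutgoingTransfer
): Promise<string> {
  const res = await post('/rpc/send_transfer', transfer);
  return res.fulfillment; // the response itself resolves the transfer
}
```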
We could also make this optional, perhaps by distinguishing a `send_transfer_and_return` method from a `send_transfer_and_wait_for_transaction_finality` method.
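The optional split could look like the following sketch. The two method names come from the comment above; the return shapes and implementation are invented for illustration.

```typescript
// Hypothetical sketch of the two variants: one returns as soon as the
// transfer is prepared, the other resolves only at finality.

type Finality = { fulfilled: boolean; fulfillment?: string };

class TransferCall {
  constructor(private finality: Promise<Finality>) {}

  // returns as soon as the transfer has been prepared and accepted
  async send_transfer_and_return(): Promise<'prepared'> {
    return 'prepared';
  }

  // resolves only once the transfer is fulfilled (or rejected)
  send_transfer_and_wait_for_transaction_finality(): Promise<Finality> {
    return this.finality;
  }
}
```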