paritytech / polkadot-sdk

The Parity Polkadot Blockchain SDK
https://polkadot.network/

Instant on-demand orders #1312

Open eskimor opened 1 year ago

eskimor commented 1 year ago

Longer term, the major selling proposition of on-demand would be low latency. With the current implementation, we process inherents first and then extrinsics. This means that at the time of scheduling we are not yet aware of any orders contained in the current block, so those orders only get scheduled in the next block. We therefore have at least 6s of latency for processing an on-demand order, even if cores are empty.

The actual scheduling happens during processing of the paras inherent. We would need to move the scheduling to after we have processed extrinsics, so we can take fresh orders into account immediately.
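
To make the ordering concrete, here is a minimal, self-contained Rust sketch of the problem described above. All names (`OnDemandOrder`, `OrderQueue`, `execute_block`, and so on) are illustrative stand-ins, not the actual runtime code:

```rust
// Minimal sketch of the ordering problem; illustrative names only,
// not actual polkadot-sdk runtime APIs.

struct OnDemandOrder {
    para_id: u32,
}

#[derive(Default)]
struct OrderQueue {
    orders: Vec<OnDemandOrder>,
}

impl OrderQueue {
    /// Assign queued orders to free cores. Today this runs as part of the
    /// paras inherent, i.e. *before* the block's extrinsics are applied.
    fn assign_to_free_cores(&mut self, free_cores: &mut Vec<u32>) {
        while !self.orders.is_empty() && !free_cores.is_empty() {
            let order = self.orders.remove(0);
            let core = free_cores.pop().unwrap();
            println!("para {} scheduled on core {}", order.para_id, core);
        }
    }
}

fn execute_block(
    queue: &mut OrderQueue,
    free_cores: &mut Vec<u32>,
    orders_in_block: Vec<OnDemandOrder>,
) {
    // 1. Inherents first: scheduling only sees orders queued in earlier blocks.
    queue.assign_to_free_cores(free_cores);

    // 2. Extrinsics afterwards: order-placing extrinsics contained in this
    //    block only enqueue the order, so it gets scheduled in the next block
    //    at the earliest, i.e. at least one 6s relay-chain block later.
    queue.orders.extend(orders_in_block);

    // The change proposed in this issue, roughly: run the assignment (again)
    // here, after extrinsics, so fresh orders are picked up immediately.
    // queue.assign_to_free_cores(free_cores);
}

fn main() {
    let mut queue = OrderQueue::default();
    let mut free_cores = vec![0, 1];

    // Block N: an order arrives as an extrinsic but is not scheduled yet.
    execute_block(&mut queue, &mut free_cores, vec![OnDemandOrder { para_id: 2000 }]);
    // Block N + 1: the order from block N finally gets a core.
    execute_block(&mut queue, &mut free_cores, vec![]);
}
```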

bkchr commented 1 year ago

Longer term, the major selling proposition of on-demand would be low latency.

Is this really the major selling point? I thought we do the scheduling a little bit ahead of time. You can never be really sure when your transaction hits the relay chain anyway. The one sending the PoV would then also only become aware that they can send the PoV after they have included the parachain block.

I would have thought you send out your buy request, then import the relay chain block that tells you that you got a slot in X relay chain blocks, and then you can send your PoV to the validators and let them already verify, or at least buffer, it so that they have it when your slot comes up.
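
A rough sketch of that flow from the collator's point of view, with purely made-up names and types (none of this corresponds to real polkadot-sdk APIs):

```rust
/// Purely illustrative collator-side view of the flow sketched above.
#[derive(Debug)]
enum OnDemandFlow {
    /// The buy request was sent to the relay chain.
    BuyRequestSent,
    /// An imported relay chain block told us we have a slot in
    /// `blocks_until_slot` relay chain blocks.
    SlotAssigned { blocks_until_slot: u32 },
    /// The PoV was already sent to the assigned validators so they can
    /// verify, or at least buffer, it before the slot comes up.
    PovDistributed,
}

fn main() {
    // Walk through the three steps of the flow once.
    let mut state = OnDemandFlow::BuyRequestSent;
    println!("{state:?}");
    state = OnDemandFlow::SlotAssigned { blocks_until_slot: 3 };
    println!("{state:?}");
    state = OnDemandFlow::PovDistributed;
    println!("{state:?}");
}
```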

eskimor commented 7 months ago

Not planned for now.

eskimor commented 5 months ago

Re-opened. I think the major use case for on-demand is reducing latency for users without having to run a bulk core every 6s. Making the latency as low as possible will therefore make for a good user experience on Polkadot.

bkchr commented 5 months ago

I think the major use case for on-demand is reducing latency for users,

The major use case is that you pay as you go and don't pay for blockspace that you are not using.

burdges commented 5 months ago

As we do whole blocks, we do not expect single incidents to trigger blocks, but rather a collection of transactions from many users. If your parachain makes a block roughly every 10 minutes, then you could tell users that some activities take time.

If you need low latency but have low throughput, then you could partner with other similar chains, and deploy a single chain together.

eskimor commented 5 months ago

If you need low latency but have low throughput, then you could partner with other similar chains, and deploy a single chain together.

That would indeed be an even better option (and I keep suggesting it), but it is not always applicable, e.g. because you then suddenly need to trust the code of others.

With fast on-demand, even if in the worst case you build a block just to get low latency for the one user currently online, that is way better than what we are doing now: building a block every 6s just in case a user comes along.

A parachain can offer its users options on latency and fees, e.g.:

- "instant" confirmation for 0.X DOT (the user basically pays for the full block),
- medium-fast confirmation for 0.Y DOT (waits a bit for other transactions),
- slow confirmation for 0.Z DOT (waits for the next regular bulk schedule, if there is any).
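
A tiny Rust sketch of how such tiers could be modelled on the parachain side; the names, variants, and fee numbers are purely hypothetical, just as the DOT amounts above are placeholders:

```rust
/// Hypothetical confirmation tiers a parachain could expose to its users.
enum Confirmation {
    /// Order a core immediately; the user effectively pays for the whole block.
    Instant,
    /// Wait up to `max_wait_blocks` to batch with other transactions first.
    Medium { max_wait_blocks: u32 },
    /// Wait for the next regular bulk-scheduled block, if there is one.
    Slow,
}

/// Placeholder fee weights per tier (arbitrary numbers, for illustration only).
fn fee_multiplier(tier: &Confirmation) -> u32 {
    match tier {
        Confirmation::Instant => 100,
        Confirmation::Medium { .. } => 10,
        Confirmation::Slow => 1,
    }
}

fn main() {
    let tiers = [
        Confirmation::Instant,
        Confirmation::Medium { max_wait_blocks: 2 },
        Confirmation::Slow,
    ];
    for tier in &tiers {
        println!("fee multiplier: {}", fee_multiplier(tier));
    }
}
```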

The major use case is that you pay as you need and not paying for blockspace that you are not using.

Yes, but if you can then also get super fast confirmation times, like the grown-ups, that's amazing UX.

Even for elastic scaling, UX will be better if the reaction to a demand spike is fast: in the best case, users don't even notice, because confirmation was as fast as always.

burdges commented 5 months ago

Also, parachains could lower confirmation times by doing their own consensus before backing. An equivocation there could enable a double spend or the like, but if collator equivocations get slashed and transaction values stay low, then this could yield sub-second confirmation times from the parachain itself.
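
As a rough illustration of what such a slashable collator equivocation would look like, the same collator signing two different parachain blocks for the same slot (the types below are made up for this example and do not exist in polkadot-sdk):

```rust
/// Illustrative-only stand-in for a parachain block hash.
#[derive(PartialEq)]
struct BlockHash([u8; 32]);

/// Illustrative-only stand-in for a header signed by a collator.
struct SignedHeader {
    collator: u64, // stand-in for the collator's public key
    slot: u64,
    block_hash: BlockHash,
}

/// Two headers from the same collator for the same slot with different block
/// hashes form an equivocation proof, which could then be slashed.
fn is_equivocation(a: &SignedHeader, b: &SignedHeader) -> bool {
    a.collator == b.collator && a.slot == b.slot && a.block_hash != b.block_hash
}

fn main() {
    let a = SignedHeader { collator: 1, slot: 42, block_hash: BlockHash([0u8; 32]) };
    let b = SignedHeader { collator: 1, slot: 42, block_hash: BlockHash([1u8; 32]) };
    println!("equivocation detected: {}", is_equivocation(&a, &b));
}
```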

bkchr commented 5 months ago

Also, parachains could lower confirmation times by doing their own consensus before backing.

Yeah, this is something we will support in the near future.