waku-org / research

Waku Protocol Research
MIT License

Estimate the maximum acceptable latency #54

Open s-tikhomirov opened 7 months ago

s-tikhomirov commented 7 months ago

In a client-server interaction in Store, how fast should the overall protocol be? There is a trade-off between establishing a more "ideal" market (i.e., elaborate pricing mechanisms, cross-checking of results) versus shipping something fast.

From a PR discussion, @jm-clius (excerpt, emphasis mine):

Many store clients perform several queries per second and waiting for a transaction every time will not be feasible. To a lesser extent, even just the offer-response mechanism may introduce unacceptable bandwidth overhead and latencies when querying at a high rate. I think we'll need some prepaid mechanism that allows the client to perform simple request-responses with proof of payment

My suggestion from the same discussion:

The following may be reasonable for an MVP:

  1. the client and the server negotiate the price;
  2. the client pays and gets assigned some prepaid balance;
  3. the client does multiple requests (while the server keeps track of its balance).
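The three-step flow above can be sketched as follows. This is an illustrative, hypothetical sketch only: the class and method names (`StoreServer`, `credit`, `handle_query`) are not part of any Waku specification, and the "price" is an abstract unit standing in for whatever the negotiation in step 1 produces.

```python
# Hypothetical sketch of the prepaid-balance flow:
#   1. a per-query price is negotiated out of band;
#   2. the client deposits a prepaid balance (credit);
#   3. the server debits the balance on each request (handle_query).

class InsufficientBalance(Exception):
    """Raised when a client's prepaid balance cannot cover a query."""


class StoreServer:
    def __init__(self, price_per_query: int):
        self.price_per_query = price_per_query   # agreed in step 1
        self.balances: dict[str, int] = {}       # client_id -> prepaid units

    def credit(self, client_id: str, amount: int) -> None:
        """Step 2: record the client's prepaid deposit."""
        self.balances[client_id] = self.balances.get(client_id, 0) + amount

    def handle_query(self, client_id: str, query: str) -> str:
        """Step 3: serve a request, debiting the prepaid balance."""
        balance = self.balances.get(client_id, 0)
        if balance < self.price_per_query:
            raise InsufficientBalance(f"{client_id} has {balance} units left")
        self.balances[client_id] = balance - self.price_per_query
        return f"results for {query!r}"   # placeholder for the actual Store lookup


server = StoreServer(price_per_query=1)
server.credit("client-a", 3)
for topic in ["topic-1", "topic-2", "topic-3"]:
    server.handle_query("client-a", topic)
# a fourth query would raise InsufficientBalance until the client tops up
```

The point of the sketch is that only steps 1 and 2 involve payment-related latency; step 3 is a plain request-response loop, which is what makes the model compatible with several queries per second.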

The result of this investigation may inform our decisions, among other things, on price negotiation protocol (#52) and proof-of-payment methods / service credentials / subscription model (#??).

jm-clius commented 7 months ago

Some form of prepaid model probably makes sense. I realise you asked about the several-queries-per-second use case elsewhere and I forgot to respond - apologies.

A typical use case is a client running an application with multiple channels/content topics that tries to retrieve "all history" after a significant period offline. Such a client will run multiple queries, some in parallel and some sequentially: one set of queries to retrieve history for each channel (a single request is limited to 10 content topics per query), plus sequential queries to retrieve subsequent pages of history. Even if we rate-limit queries here, the average use case is such that only something like a prepaid model would make sense in the long term.
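The query pattern described above can be sketched as follows, assuming the stated 10-content-topic-per-query limit and cursor-based pagination. The helper names (`chunk_topics`, `fetch_all_history`, `store_query`) are hypothetical, not actual Waku Store APIs; the sketch only illustrates why a single "catch up" can fan out into many billable requests.

```python
# Illustrative sketch: splitting a channel's content topics into
# query-sized batches, then paging through each batch sequentially.

def chunk_topics(topics: list[str], max_per_query: int = 10) -> list[list[str]]:
    """Split content topics into batches of at most max_per_query."""
    return [topics[i:i + max_per_query]
            for i in range(0, len(topics), max_per_query)]


def fetch_all_history(store_query, topics: list[str]) -> list[str]:
    """Run one query per batch of topics, paging each batch to the end.

    store_query(batch, cursor) is assumed to return (page, next_cursor),
    with next_cursor == None once the batch's history is exhausted.
    """
    messages: list[str] = []
    for batch in chunk_topics(topics):
        cursor = None
        while True:
            page, cursor = store_query(batch, cursor)
            messages.extend(page)
            if cursor is None:   # no more pages for this batch
                break
    return messages
```

With, say, 25 content topics the client already issues at least three queries before pagination multiplies the count further, which is why a per-request payment round-trip quickly becomes infeasible.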