paritytech / json-rpc-interface-spec


archive/storageDiff: Suggestion to make storageDiff subscription-based #160

Open · lexnv opened this issue 1 month ago

lexnv commented 1 month ago

At the moment, storageDiff is a plain method.

To implement storageDiff between two arbitrary blocks, we'll have to iterate through the keys of both blocks and fetch the storage values of the keys requested by the user.

This operation might be expensive enough to DoS the node, considering users might request the difference between genesis <-> latest-finalized.
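
For illustration, a minimal sketch of that naive iteration, assuming a hypothetical `Backend` trait with `storage_keys` and `storage` accessors (this is not the actual polkadot-sdk API):

```rust
use std::collections::BTreeSet;

/// Hypothetical storage backend; the real polkadot-sdk interfaces differ.
trait Backend {
    fn storage_keys(&self, block: [u8; 32]) -> Vec<Vec<u8>>;
    fn storage(&self, block: [u8; 32], key: &[u8]) -> Option<Vec<u8>>;
}

/// A single changed entry: key, value in the old block, value in the new block.
struct DiffEntry {
    key: Vec<u8>,
    old: Option<Vec<u8>>,
    new: Option<Vec<u8>>,
}

/// Naive diff: walk the union of both key sets and compare values.
/// Cost grows with the total number of keys touched, which is why a
/// genesis <-> latest-finalized diff can be arbitrarily expensive.
fn storage_diff<B: Backend>(
    backend: &B,
    old_block: [u8; 32],
    new_block: [u8; 32],
) -> Vec<DiffEntry> {
    let keys: BTreeSet<Vec<u8>> = backend
        .storage_keys(old_block)
        .into_iter()
        .chain(backend.storage_keys(new_block))
        .collect();

    keys.into_iter()
        .filter_map(|key| {
            let old = backend.storage(old_block, &key);
            let new = backend.storage(new_block, &key);
            (old != new).then(|| DiffEntry { key, old, new })
        })
        .collect()
}
```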

Another benefit of having a subscription-based approach for storageDiff is that we can leverage the backpressure system implemented for the chainHead subscription. On the other hand, users will have to adapt their code to work with a subscription-based approach.

Implementation ref: https://github.com/paritytech/polkadot-sdk/pull/5997/files#diff-ac650a2e540d8d37576d05083cce78260192944382ecf1e68f0db50b5d86907eR350-R369

cc @paritytech/subxt-team @tomaka @josepot Would love to get your thoughts on this 🙏

niklasad1 commented 1 month ago

I would also want to change archive_unstable_storage to be subscription-based and make it similar to chainHead_v1_storage.

The main benefits of it:

I think it would be beneficial to make it subscription-based, but I haven't followed the discussion on why it's not. WDYT?

tomaka commented 1 month ago

There are big differences between chainHead_storage and archive_storage that explain why chainHead_storage uses subscriptions and not archive_storage. In the case of chainHead, a light client needs to send multiple networking requests to full nodes in order to fulfill the request, and thus it is advantageous for latency to send notifications to the RPC client progressively rather than all at once. It is also important for the request to be cancelable because it might take a long time, and during this time the RPC client might no longer be interested in the operation. Light clients can't implement archive_storage by design, so it's not a problem.

None of these reasons are related to backpressure/DoS attacks, and chainHead_storage does not require/have any backpressure system other than what all JSON-RPC functions have. The chainHead_storage_continue function exists not for backpressure, but for the situation where an RPC client is no longer interested in an operation. Without storage_continue, in the time between when the client is no longer interested in an operation and when the server receives the chainHead_stopOperation call, the server might have sent several megabytes. storage_continue exists so that the client and server don't waste bandwidth.

In theory, archive_unstable_storage should be able to simply stream its result onto the TCP connection, and the back-pressure can be automatically handled. Think for example of a std::io::Read or AsyncRead trait implementation that progressively generates results on-the-fly when read or poll_read are called.
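
To illustrate that idea, here is a minimal sketch of a `std::io::Read` implementation that only generates the next chunk of JSON when `read` is called, so very little data is ever buffered and TCP back-pressure propagates naturally to the storage iteration. It assumes the diff items arrive as an iterator of already hex-encoded key/value strings and it omits the JSON-RPC envelope:

```rust
use std::io::{self, Read};

/// Streams a JSON array of `{"key": ..., "value": ...}` objects, producing
/// the next item only when the caller asks for more bytes.
/// `entries` stands in for lazily-fetched (hex key, hex value) pairs, so no
/// JSON string escaping is needed here.
struct DiffResponseReader<I: Iterator<Item = (String, String)>> {
    entries: I,
    pending: Vec<u8>, // bytes generated but not yet handed to the caller
    started: bool,
    wrote_item: bool,
    finished: bool,
}

impl<I: Iterator<Item = (String, String)>> Read for DiffResponseReader<I> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.pending.is_empty() && !self.finished {
            // Generate the next chunk of JSON on the fly.
            if !self.started {
                self.pending.extend_from_slice(b"[");
                self.started = true;
            }
            match self.entries.next() {
                Some((key, value)) => {
                    let sep = if self.wrote_item { "," } else { "" };
                    self.wrote_item = true;
                    let item = format!("{sep}{{\"key\":\"{key}\",\"value\":\"{value}\"}}");
                    self.pending.extend_from_slice(item.as_bytes());
                }
                None => {
                    self.pending.extend_from_slice(b"]");
                    self.finished = true;
                }
            }
        }
        // Hand out as much of the pending chunk as fits; returning 0 once
        // everything has been consumed signals end-of-stream.
        let n = self.pending.len().min(buf.len());
        buf[..n].copy_from_slice(&self.pending[..n]);
        self.pending.drain(..n);
        Ok(n)
    }
}
```

An AsyncRead version would look the same, except that the chunk generation happens inside poll_read.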

The implementation of the method response needs to have some kind of limit (or at least polkadot-sdk has one) on the number of storage queries, to avoid DoS attempts.

This is already covered by the function:

The JSON-RPC server is encouraged to accept at least one `archive_unstable_storageDiff` method per JSON-RPC client. Trying to make more calls might lead to a JSON-RPC error when calling `archive_unstable_storageDiff`. The JSON-RPC server must return an error code if the server is overloaded and cannot accept a new method call.

If one wants to query plenty of storage values, the server may only reply with a handful plus "x items were not processed", and the client would need to make several RPC calls to complete them all.

I don't see how this helps against DoS attacks. You suggest that the server sends a small amount of data then waits for the client to send back some kind of message before the server continues. This is exactly what happens on the TCP connection with ACK messages. There's no need to build a similar mechanism on top of what TCP/IP already does. The chainHead_storage_continue function does that, but it exists in order to save bandwidth when the client is cooperative rather than to protect against any kind of attack.

tomaka commented 1 month ago

The one aspect where turning archive_storage into a subscription would be advantageous is the head-of-line blocking problem. If the client asks for a huge number of keys, and the server sends back a huge response, no other response can arrive while that response is being downloaded. For example, if the client wants to send a small system_name request, the response to this system_name will come after the response to archive_storage, which is potentially several seconds later.

But again this is just to help the client, and if the client experiences this problem it can simply decide by itself to send a series of smaller queries instead. There's no need to adjust the API of archive_storage.
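
A hedged sketch of what such a client could do on its own; the batching helper and the `query_storage` callback below are hypothetical and not part of any real client API:

```rust
/// Split one huge key list into several smaller archive_unstable_storage
/// calls so that unrelated requests (e.g. system_name) can be interleaved
/// between the responses instead of waiting behind one multi-megabyte reply.
async fn storage_in_batches<F, Fut>(
    keys: Vec<String>,
    batch_size: usize,
    mut query_storage: F,
) -> Vec<(String, Option<String>)>
where
    // `query_storage` performs one JSON-RPC call for a batch of keys and
    // returns the (key, value) pairs the server replied with.
    F: FnMut(Vec<String>) -> Fut,
    Fut: std::future::Future<Output = Vec<(String, Option<String>)>>,
{
    let mut out = Vec::with_capacity(keys.len());
    for batch in keys.chunks(batch_size) {
        out.extend(query_storage(batch.to_vec()).await);
    }
    out
}
```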

niklasad1 commented 1 month ago

In theory, archive_unstable_storage should be able to simply stream its result onto the TCP connection, and the back-pressure can be automatically handled. Think for example of a std::io::Read or AsyncRead trait implementation that progressively generates results on-the-fly when read or poll_read are called.

I don't see how this helps against DoS attacks. You suggest that the server sends a small amount of data then waits for the client to send back some kind of message before the server continues. This is exactly what happens on the TCP connection with ACK messages. There's no need to build a similar mechanism on top of what TCP/IP already does. The chainHead_storage_continue function does that, but it exists in order to save bandwidth when the client is cooperative rather than to protect against any kind of attack.

Sure, but when it comes to archive_unstable_storage it's quite annoying that the client could send "n big queries" (where n depends on the server's buffer size, because there is no other limit for that AFAIU). For example, if the server doesn't have a streaming JSON serializer, i.e. it serializes the complete method response in one go, all those responses would be kept in memory until the complete message is sent, or until the TCP/IP connection is closed because the max retransmission limit is exceeded (which would be on the order of minutes). This can also be prevented by having a limit on the number of storage queries in each call, which I was hoping to avoid; max storage queries per call * buf_cap would then be the memory bound.

Am I understanding you correctly that you think this is more of an implementation issue, which could be solved with a single method response?

It would certainly be easier to implement such a thing with a subscription instead, by iterating and sending one value at a time, but bounding the number of storage queries in each call works as well for now.

tomaka commented 1 month ago

Am I understanding you correctly that you think this is more of an implementation issue, which could be solved with a single method response?

Yes!

I don't think that the lack of a streaming JSON serializer is a big problem. JSON is a very simple language, and you can simply generate JSON manually as strings. Send something like `{"jsonrpc":"2.0", "id": ..., "response":` (I'm writing this off the top of my head), then send the items one by one (properly serialized), then finish by writing `}`.
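
Something along these lines, as a rough sketch (note that the JSON-RPC 2.0 envelope field is actually spelled `result`, and the item shape here is purely illustrative):

```rust
use std::io::{self, Write};

/// Emit the JSON-RPC envelope by hand and serialize each diff item as soon
/// as it is produced, so the full response never sits in memory.
/// Keys and values are assumed to be hex strings, so no escaping is needed.
fn write_response<W, I>(mut out: W, id: u64, items: I) -> io::Result<()>
where
    W: Write,
    I: Iterator<Item = (String, String)>,
{
    write!(out, "{{\"jsonrpc\":\"2.0\",\"id\":{id},\"result\":[")?;
    for (i, (key, value)) in items.enumerate() {
        if i > 0 {
            write!(out, ",")?;
        }
        write!(out, "{{\"key\":\"{key}\",\"value\":\"{value}\"}}")?;
    }
    write!(out, "]}}")
}
```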

I realize that this might conflict with your existing abstraction layers (which I'm not familiar with anymore), but between having a complicated JSON streaming system, having a complicated notifications system for that RPC function, or simply sending strings, IMO simpler is better.

jsdw commented 1 month ago

Note: I'm only really thinking about archive_storageDiff here:

I realize that this might conflict with your existing abstraction layers (which I'm not familiar with anymore), but between having a complicated JSON streaming system, having a complicated notifications system for that RPC function, or simply sending strings, IMO simpler is better.

I guess this is the main point yeah. We could implement the method in a nice way by manually streaming it as you described, but that also requires punching through our JSON-RPC abstraction to the websocket implementation so that we can break the message into multiple frames (and avoid sending a length up front), which I suspect is very difficult for us to do.

And so, my argument for having it be subscription-based would be that it's difficult for us to implement nicely, and (if we cared) I suspect it would be difficult for anybody else to do the same. If we instead use the tools available at the JSON-RPC abstraction layer (i.e. subscriptions) we'll avoid this difficulty (and also happen to have the nice side effect of avoiding head-of-line blocking, since you can't interleave anything other than control frames with partial websocket frames).

Owing to this difficulty/complexity of implementation, we'll likely have to build up a response in memory and error if that grows too large, which is less nice for users (who have to then figure out how to break their requests up to avoid such errors) but also doable.
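
A hedged sketch of that size-bounded accumulation; the function and the byte limit are illustrative only, not an existing polkadot-sdk API:

```rust
/// Accumulate already-encoded items in memory and fail once a configurable
/// byte limit is exceeded, so the client knows it has to split its request.
fn collect_bounded<I: Iterator<Item = Vec<u8>>>(
    items: I,
    max_response_bytes: usize,
) -> Result<Vec<Vec<u8>>, &'static str> {
    let mut total = 0usize;
    let mut out = Vec::new();
    for item in items {
        total += item.len();
        if total > max_response_bytes {
            return Err("response too large, split the request into smaller calls");
        }
        out.push(item);
    }
    Ok(out)
}
```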

tomaka commented 2 weeks ago

If I summarize, we agree that ideally the JSON-RPC API would stay as it is, and the implementation would be modified, but in practice it is too difficult to do so.

But I'd like to raise another trade-off: is someone actually going to attack a JSON-RPC node by repeatedly sending large requests? As far as I'm aware no DoS attempt has ever happened to Parity, and I also have never heard of this happening to anyone. The archive nodes that are "important", for example the ones of Subscan/Polkascan/etc. are, as far as I'm aware, not publicly-accessible from the Internet, so this is irrelevant for them.

On the negative side, this modification to the JSON-RPC API is essentially a hack, and makes the life of the JSON-RPC client more complicated. On the positive side, it would prevent some DoS attacks. But if DoS attacks are in reality not really a concern, is it really worth doing this?

jsdw commented 2 weeks ago

If I summarize, we agree that ideally the JSON-RPC API would stay as it is, and the implementation would be modified, but in practice it is too difficult to do so.

My summary would be more like (again focusing only on archive_storageDiff here):

Pros of current API:

Pros of subscription-based API:

On the positive side, it would prevent some DoS attacks. But if DoS attacks are in reality not really a concern, is it really worth doing this?

Even if they are not a reality, would it not be better to avoid such a potentially easy target?

Overall, I don't overly love having to complicate the API for users even a little bit, but in return we would also make it more likely to succeed (e.g. for large responses), more resilient (e.g. to DoS attacks) and more performant (than the straightforward implementation at least, for users too). So I can't help but think that it is worthwhile overall.