core-wg / corrclar

Corrections and Clarifications to CoRE standards

Blockwise: Going up #23

Open chrysn opened 2 years ago

chrysn commented 2 years ago

As I understand 7959, the block size can not go up, only down, even on Block2.

I think it would be convenient to allow the block size to go up -- some implementations already tolerate that, and in situations where the reason for sending little data has changed (for example, because a small Block2 size was only chosen for amplification mitigation), going up makes a lot of sense.

Of course, the size can only be incremented by 1 for every 0 at the end of the binary representation of the block number.
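A minimal sketch of that constraint (helper names are mine, not from RFC 7959): a block's byte offset is NUM * 2^(SZX+4), so every doubling of the block size halves the block number, which only works while the current number still has trailing zero bits.

```python
def block_size(szx: int) -> int:
    # RFC 7959 block sizes are 2**(SZX + 4), with SZX in 0..6 (16..1024 bytes)
    return 1 << (szx + 4)

def steps_up_possible(num: int) -> int:
    """How many times the block size could be doubled at block NUM without
    changing the byte offset: the number of trailing zero bits in NUM."""
    if num == 0:
        return 6                    # block 0 can be re-requested at any size
    return (num & -num).bit_length() - 1

# Example: a transfer at 128-byte blocks (SZX=3) with block 6 up next.
# 6 = 0b110 has one trailing zero, so one step up (256-byte blocks, block 3)
# keeps the offset at 768 bytes, but two steps (512 bytes) would not.
assert steps_up_possible(6) == 1
assert 6 * block_size(3) == 3 * block_size(4) == 768
```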

(CC'ing @mcr because he mentioned this in the current interim; it also came up here in the context of OSCORE blockwise transfers, where a situation that influences the choice of the inner block size can change.)

cabo commented 2 years ago

Of course, the size can only be incremented by 1 for every 0 at the end of the binary representation of the block number.

If you go from 64 to 1024, you might as well restart the whole thing, and that also solves the going-up problem :-)

chrysn commented 2 years ago

Yes, in that range it makes no sense. But if things just barely don't fit in an early request (as is the case with the EDHOC+OSCORE inner blockwise transfer @rikard-sics showed today), it'd only be about going up one step.

Thing is: it's something that probably every server that supports going smaller (which all should) already supports unless it has an explicit check, so why rule out these cases?

cabo commented 2 years ago

why rule out these cases?

The objective was to enable either side to push down the size. If the other side then starts going up again, this may not terminate. So going up should be tied to a specific change in the channel, which we didn't have as an example in 2013.

boaks commented 2 years ago

I don't understand why a server would first choose a (too) small value and then relax that to a larger value. Why, if the larger value is also OK, does the server not already go for the larger value in the first exchange? If OSCORE makes the difference, then this is known at the first exchange as well, and the server can already choose a larger value. Sometimes I also set up different block sizes for plain (coap) and DTLS (coaps), and that works without fiddling.

chrysn commented 2 years ago

The objective was to enable either side to push down the size. If the other side then starts going up again, this may not terminate. So going up should be tied to a specific change in the channel, which we didn't have as an example in 2013.

This wouldn't change either side's ability to limit the size.

In Block2, the server could still send the smaller one (as it does with the first request). In Block1, the server might indicate (at the even block numbers) its readiness to receive larger blocks by stating the larger size in its control usage of the option. The transfer still stays at the smaller of what the peers want to use, but either side can offer to receive something larger (without forcing the other to act on that).

why a server would first choose a (too) small value

This is not about the general block size choice (where DTLS and UDP might use different values); this is about cases where there is a particular reason to pick a different size for a single message, for example a large NoCacheKey option, something large in the envelope, or only having the leisure to send 128 bytes in the first response due to amplification mitigation.

mcr commented 2 years ago

If a 1024 byte block maximum has been advertised, is a sender allowed to send 512 byte blocks instead?

boaks commented 2 years ago

I still have a gap in understanding.

"large NoCacheKey option" for CoAP itself, options are not considered to be the payload, the blocksize applies to,or? If it's only about OSCORE, where such an option may turn into encrypted payload, then it a feature for OSCORE only.

"large in the envelop". I have to consider, I'm not sure, what you envelope is considered. I guess again, it's more for OSCORE, when a option is converted into encrypted payload.

"leisure to send 128 byte in the first response due to amplification mitigation" my understanding of such attacks is, that it is required to limit this for all blocks. Except some other advanced mechanism offers other protection.

boaks commented 2 years ago

If a 1024 byte block maximum has been advertised, is a sender allowed to send 512 byte blocks instead?

That depends on what "advertised" means: proposed by the client, or agreed to by the server. And for Block2, if a "stateless blockwise" implementation is used, the client may always propose a new block size, and the server may always reduce the block size it returns, even if a larger one was agreed on before.

There was discussion on the e-mail list about that (Carsten, Jon, Simon and me). If it's important for you, I will try to provide a link.

Edit: discussion on the mailing-list

chrysn commented 2 years ago

"large NoCacheKey option"

The option is not part of the block, but it diminishes the space left in the MTU (or unfragmented size) for the block.

"large in the envelop"

I'd loosely define the envelope as everything between the MTU and the sum of options and payload -- could be the token, could be large IP options if that's a thing; OSCORE is not special here, it just also has places that count as "envelope" here.

"leisure to send 128 byte in the first response due to amplification mitigation"

The limits can go away as the mechanisms are run; Echo can run during Block2 blocks 0 and 1.
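A hedged illustration of that timeline (the exchange shape is my own sketch, drawing on the RFC 9175 Echo option; Block2 values are written informally as num/more/size):

```python
# Rough sketch: while the client's address is unverified, the server keeps
# the response small (amplification mitigation) and includes an Echo value;
# once the client has echoed it, the reason for the small Block2 size is gone,
# yet as written RFC 7959 keeps the rest of the transfer at that small size.
exchange = [
    # (client request,                          server response)
    ("GET /r",                                  "2.05, Echo: e1, Block2: 0/1/128"),
    ("GET /r, Echo: e1, Block2: 1/0/128",       "2.05, Block2: 1/1/128"),
    # The address is now verified.  The next block starts at byte offset 256,
    # so a step up to 256-byte blocks (requested as block 1 of the new size)
    # would be possible in principle -- but the transfer stays at 128 bytes.
    ("GET /r, Block2: 2/0/128",                 "2.05, Block2: 2/1/128"),
]
```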

boaks commented 2 years ago

About the first two:

The basic idea seems to be that if other parts of the message eat up too much of the MTU, so that the remaining room requires a small block size, then that block size could be enlarged later, when those other parts take up less space. Is that the assumption?

boaks commented 2 years ago

I'm still not sure if an adaptation is generally required.

It looks more like some new extensions may require this, but in general it's hard to see whether that is really the case.

Let me therefore try to sum up the already-cited e-mail exchange about block-size changes:

At least, that's my understanding of the e-mail exchange in the list.

With that, if only the first request really requires a small block size, the client may just send the second request using the same small one, and in the third request the client may use a larger one (doubled). Whether that works and is agreed to by the server depends on the constraints there. AFAIK, that is compliant with RFC 7959 and mainly requires that the implementations are aware of it. Implementations that e.g. stick to the block number to match blocks between request and response will fail; it requires matching on the block offset := block-num * block-size.
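A small sketch of that matching rule (function name is mine): when the block size changes mid-transfer, request and response blocks can only be correlated by byte offset, not by block number.

```python
def block_offset(num: int, szx: int) -> int:
    """Byte offset addressed by block NUM at size exponent SZX (size = 2**(SZX + 4))."""
    return num << (szx + 4)

# Two 64-byte blocks (SZX=2), then the client doubles to 128 bytes (SZX=3):
# it continues with block 1 of the new size, which starts at the same byte
# offset (128) as block 2 of the old size.  Matching on the number alone fails.
assert block_offset(2, 2) == block_offset(1, 3) == 128
```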

mcr commented 2 years ago

I still have a gap in understanding.

Changes in routing could result in changes to optimal MTU. If many blocks do not get through, trying a smaller size is a good idea. If it works, then increase the size (binary search perhaps) until a new optimum is found.
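A rough sketch of such probing (purely illustrative, a simple step-down/step-up rule rather than an actual binary search; names and thresholds are mine):

```python
SZX_MIN, SZX_MAX = 0, 6                      # 16-byte .. 1024-byte blocks

def next_szx(szx: int, streak: int) -> int:
    """SZX to try for the following block; `streak` counts consecutive
    successful block transfers (positive) or losses (negative)."""
    if streak <= -2:
        return max(SZX_MIN, szx - 1)         # path MTU seems to have shrunk
    if streak >= 8:
        # occasionally probe upward again -- which, per the renumbering
        # constraint, is only possible at block numbers with trailing zeros
        return min(SZX_MAX, szx + 1)
    return szx
```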

boaks commented 2 years ago

As I tried to explain:

The client is able to try a larger or smaller block size. Only the server is not allowed to send a larger block size than requested by the client; changing the server's block-size selection would come with some security concerns.

So: is the intention that the server sends larger responses than requested by the client?

chrysn commented 2 years ago

No worries about the server sending something larger -- that's not the intention of this.

But the RFC says that once the client has received something small, it can't request something larger again (even if it'd obviously need to be prepared for the server to send something equally small again).

The precise text of RFC 7959 is:

For Block2, if the request suggested a larger value of SZX, the next request MUST move SZX down to the size given in the response.

Taken very literally, this only makes a statement about the single next request (so if you request 1k and the server sends 128 bytes, you must request 128 bytes, but in the request after that you might already ask for 256 bytes, because in the second exchange we requested and got 128 bytes, and thus the "if" doesn't match). So maybe we're already good -- but I'd rather not rely on taking a line that literally when the intention seems to be "once it goes down, stay low".
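To make that literal reading concrete, a hedged sketch (my own paraphrase of the quoted sentence, nothing normative):

```python
def next_request_szx(asked: int, answered: int, wanted: int) -> int:
    """SZX the client may use in the request after an (asked, answered) pair,
    under the most literal reading of the sentence quoted above."""
    if asked > answered:
        return answered   # the MUST applies: move down in the *next* request
    return wanted         # the "if" doesn't match, so nothing forbids going up

# Ask 1024 (SZX=6), get 128 (SZX=3): the next request must use SZX=3.
assert next_request_szx(asked=6, answered=3, wanted=6) == 3
# That request asks 128 and gets 128, so the one after may already ask 256.
assert next_request_szx(asked=3, answered=3, wanted=4) == 4
```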

boaks commented 2 years ago

I think this was the discussion on the e-mail list.

The main statement for me there was Carsten's:

A Stateless server may simply not know what it did before.

With that, even if that discussion was about going down: without state, there is neither up nor down. At least, that's my conclusion. Maybe we're all on the same understanding, and it's more about making the statement in RFC 7959 precise, since it seems to contradict the intended "stateless server".

chrysn commented 2 years ago

Stateless servers are something that is allowed (and made easy), not forced. With all-stateless servers (or, really any servers that are implemented with simplicity in mind), things will just work. My concern is that someone takes the current wording to build a very strict ("We disallow anything that deviates from the narrowest interpretation of the spec, for it could be an attempt to hack our highly secure server!") server side implementation, and starts rejecting requests once a client suggests they could go "up" (which any server I'd implement wouldn't even ever notice).