SanKumar2015 / EST-coaps

EST over CoAPs IETF draft

Ben K.'s AD review (8/28/2019) #150

Closed csosto-pk closed 4 years ago

csosto-pk commented 5 years ago

Answers to Ben's comments (including discussion on the ACE mailing list with @petervanderstok, Jim S.):

Section 2

This document also profiles the use of EST to only support certificate-based client authentication. HTTP Basic or Digest authentication (as described in Section 3.2.3 of [RFC7030]) are not supported.

Is the intent just to exclude HTTP-layer authentication, or to specifically prefer client certificate authentication? 7030 does allow for non-certificate TLS-layer authentication (e.g., TLS-SRP), which would be compatible with DTLS just fine. There's also recurrent talk of getting modern PAKE(s) integrated with TLS, which might also be an option in the constrained space. [There are subsequent parts of the document that continue to assume only-certificates, so I'm mostly assuming that the intent is specifically to prefer client certificates, and have not made specific notes at those other places in the document.]

Addressed in the mailing list discussion with Ben.


DTLS 1.2 implementations must use the Supported Elliptic Curves and Supported Point Formats Extensions in [RFC8422]. Uncompressed point format must also be supported. DTLS 1.3 [I-D.ietf-tls-dtls13] implementations differ from DTLS 1.2 because they do not support point format negotiation in favor of a single point format for each curve. Thus, support for DTLS 1.3 does not mandate point format extensions and negotiation.

DTLS 1.3 uses the "supported_groups" extension in contrast to Supported Elliptic Curves for 1.2; we should mention that disparity as well.

Peter added in the text.


o a previously issued client certificate (e.g., an existing certificate issued by the EST CA); this could be a common case for simple re-enrollment of clients.

Is "re-enrollment" intended to cover renewal operations?

Yes.


o a previously installed certificate (e.g., manufacturer IDevID [ieee802.1ar] or a certificate issued by some other party); the server is expected to trust that certificate. IDevID's are

"trust" can cover a lot of things, many of which we don't really need here; would "expected to be able to validate" suffice?

Removed the unnecessary phrase "the server is expected to trust that certificate".


As described in Section 2.1 of [RFC5272] proof-of-identity refers to a value that can be used to prove that the private key corresponding to the public key is in the possession of and can be used by an end- entity or client. Additionally, channel-binding information can link

nit: "the certified public key", I think, since the certificate is what binds the identity to the public key. Also, this sentence is a bit awkward, though I don't have any concrete rewording suggestions at present.

Used "the certified public key" in the text.


Given that after a successful enrollment, it is more likely that a new EST transaction will take place after a significant amount of time, the DTLS connections SHOULD only be kept alive for EST messages that are relatively close to each other. In some cases, like NAT rebinding, keeping the state of a connection is not possible when devices sleep for extended periods of time. In such occasions, [I-D.ietf-tls-dtls-connection-id] negotiates a connection ID that can eliminate the need for new handshake and its additional cost.

Do we also want to mention DTLS 1.3 session resumption here as less expensive than a full handshake? It's not as cheap as "just keep using the same connection ID", of course, but has somewhat different other properties.

Peter fixed in the text.


Section 5

o Simple enroll and re-enroll for a CA to sign public client identity key.

nit(?): is this "public client identity key" or "client identity public key" or something else?

Fixed: "client identity public key".


o Certificate Signing Request (CSR) attribute messages that inform the client of the fields to include in a CSR.

nit: "informs"

Fixed by Peter.


o Server-side key generation messages to provide a private client identity key when the client choses so.

(similar nit(?) as above)

"client identity private key". Fixed.


Figure 5 in Section 3.2.2 of [RFC7030] enumerates the operations and corresponding paths which are supported by EST. Table 1 provides the mapping from the EST URI path to the shorter EST-coaps URI path.

Our table has the entries in a different order than 7030's table. We also don't say anything about the (lack of the) fullcmc endpoint. The serverkeygen endpoints could perhaps have some notation to indicate that the private key is always returned, in addition to the PKCS#7 vs. pkix-cert question that distinguishes skg and skc.

Addressed in text.
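
For quick reference, here is the path mapping being discussed, sketched as a Python dictionary (entries follow the draft's Table 1 at this revision; the comments restate what the thread says about /fullcmc and the two serverkeygen variants):

    # Short EST-coaps URIs and the EST (RFC 7030) endpoints they stand for,
    # per the draft's Table 1. There is no EST-coaps equivalent of /fullcmc.
    EST_COAPS_TO_EST = {
        "/crts": "/cacerts",         # CA certificates distribution
        "/sen":  "/simpleenroll",    # enroll
        "/sren": "/simplereenroll",  # re-enroll
        "/att":  "/csrattrs",        # CSR attributes
        "/skg":  "/serverkeygen",    # server-side keygen; cert returned in a PKCS#7 container
        "/skc":  "/serverkeygen",    # server-side keygen; cert returned as a single pkix-cert
    }
    # Both /skg and /skc always return the generated private key alongside the certificate.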


The /skg message is the EST /serverkeygen equivalent where the client requests for a certificate in PKCS#7 format and a private key. If

nit: s/requests for a/requests a/

Fixed in text.


Section 4

As per sections 3.3 and 4.4 of [RFC7925], the mandatory cipher suite

nit: I do see RFC 7925 in the subject heading, but the lead-in here is still a bit jarring. Without some statement in this document to that effect, RFC 7925 is not binding on the protocol specified in this document, so I think it's better to say something like "In accordance with", or even to flat out state that "this document conforms to the DTLS 1.2 profile specified in RFC 7925".

Fixed. Added the text.


It's a little unfortunate that we can only indicate ct=62 for the last two, and there's no way to indicate what content types we expect within that container.

Yes. We had extensive discussions in the WG about this. There is nothing else we could do due to the CoAP spec.


The first three lines of the discovery response above MUST be returned if the server supports resource discovery. The last three

It may be worth listing out ace.est.crts, ace.est.sen, and ace.est.sren explicitly for clarity (especially since line breaks are "only for readability")

Peter fixed in text.
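
As an aside, a small Python sketch of how a client might pull the EST resources out of a /.well-known/core response; the link-format payload below is illustrative (the paths and ct values are made up for the sketch), only the rt names come from the text above:

    import re

    # Illustrative /.well-known/core?rt=ace.est* response; the paths and ct values
    # are made up here, the rt names are the ones listed in the text above.
    LINK_FORMAT = (
        '</est/crts>;rt="ace.est.crts";ct=281,'
        '</est/sen>;rt="ace.est.sen";ct=281,'
        '</est/sren>;rt="ace.est.sren";ct=281'
    )

    def est_resources(link_format: str) -> dict:
        """Map each advertised ace.est.* resource type to its URI path."""
        found = {}
        for link in link_format.split(","):
            path = re.search(r"<([^>]+)>", link)
            rtype = re.search(r'rt="([^"]+)"', link)
            if path and rtype and rtype.group(1).startswith("ace.est"):
                found[rtype.group(1)] = path.group(1)
        return found

    print(est_resources(LINK_FORMAT))
    # {'ace.est.crts': '/est/crts', 'ace.est.sen': '/est/sen', 'ace.est.sren': '/est/sren'}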


lines are only included if the corresponding EST functions are implemented. The Content-Formats in the response allow the client to request one that is supported by the server. These are the values that would be sent in the client request with an Accept option.

It seems that these specific values (or a subset thereof) are mandatory; a forward reference might be in order.


Section 5.1

With the current text, I expect us to get several IESG questions about "why do you have both discovery and well-known URIs?". I think we need to treat this more prominently in the first few paragraphs of the section, perhaps just after we discuss the short-est strings, so we could tie into the constrained nature of things and how some devices may need to hardcode assumptions about the endpoint location.

Right. This is now part of the text in Section 5.1


Section 5.2

While [RFC7030] permits a number of these functions to be used without authentication, this specification requires that the client MUST be authenticated for all functions.

Perhaps this divergence from 7030 should be noted more prominently, perhaps in the section title or a dedicated "Differences from RFC 7030" section?

We made sure this is clear in the protocol design section so readers can see it early in the document.


Section 5.2

Did we consider merging this table with Table 1?

Yes, but we thought they would be clearer kept separate.


Section 5.3

EST-coaps is designed for low-resource devices and hence does not need to send Base64-encoded data. Simple binary is more efficient (30% smaller payload) and well supported by CoAP. Thus, the payload for a given Media-Type follows the ASN.1 structure of the Media-Type and is transported in binary format.

This last sentence is only true when scoped to this document, for which all the content we're handling is specified using ASN.1. I don't know whether we want to tweak the wording to reflect that or not, though. Also, we probably should say DER-encoded ASN.1 structure (or BER? I'd have to check what the requirements are) since ASN.1 is just the abstract syntax and not the encoding rules.

Peter fixed in text.
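
For what it's worth, the size argument is easy to sanity-check in Python (random bytes stand in for a DER blob; Base64 expands binary by roughly a third, which is where the draft's "30% smaller" figure comes from):

    import base64, os

    der = os.urandom(600)            # stand-in for a DER-encoded payload
    b64 = base64.b64encode(der)
    print(len(der), len(b64))        # 600 800
    print(1 - len(der) / len(b64))   # 0.25 -> the binary form is roughly a quarter
                                     # to a third smaller, depending on how you count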


When the client makes an /skc request the certificate returned with the private key is a single X.509 certificate (not a PKCS#7 container) with Content-Format identifier TBD287 (0x011F) instead of 281. In cases where the private key is encrypted with CMS (as explained in Section 5.8) the Content-Format identifier is 280 (0x0118) instead of 284. The key and certificate representations are ASN.1 encoded in binary format. An example is shown in Appendix A.3.

I think these relationships might be more clear in tabular form; I didn't really understand the scheme at this point in the document, and needed to get a ways further in before it really "clicked".

Addressed with a new table.
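
The relationships, as I understand them from the quoted text, sketched as a lookup table (TBD287 was the then-pending pkix-cert identifier; treat this as a reading aid, the draft's new table is authoritative):

    # (endpoint, private-key protection) -> (key Content-Format, certificate Content-Format)
    SERVER_KEYGEN_CONTENT_FORMATS = {
        ("/skg", "unprotected PKCS#8"): (284, 281),   # cert in a PKCS#7 container
        ("/skg", "CMS EnvelopedData"):  (280, 281),
        ("/skc", "unprotected PKCS#8"): (284, 287),   # cert as a single pkix-cert (TBD287)
        ("/skc", "CMS EnvelopedData"):  (280, 287),
    }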


Section 5.4

o EST-coaps servers sometimes need to provide delayed responses which are conveyed with an empty ACK or an ACK containing response code 5.03 as explained in Section 5.7. Thus, it is RECOMMENDED

nit: the response itself (delayed or not) is not in the ACK, so maybe "the need for which is conveyed".

Peter rephrased to "[...] which are preceded by an immediately returned empty ACK [...]".


Section 5.6

layer. In addition, invokers residing on a 6LoWPAN over IEEE 802.15.4 [ieee802.15.4] network should attempt to size CoAP messages such that each DTLS record will fit within one or two IEEE 802.15.4 frames.

Is this intended to be a normative SHOULD? If not, it feels like we need a reference or justification.

Nah. We had explicit comments in the list not to put normative language that repeats normative language from other standards.


Section 5.7

certificate to the client after a short delay. If the certificate response is large, the server will need more than one Block2 blocks to transfer it.

nit: "Block2 block" singular

Peter fixed.


POST [2001:db8::2:1]:61616/est/sen (CON)(1:0/1/256) {CSR (frag# 1)} -->
  <-- (ACK) (1:0/1/256) (2.31 Continue)

Where is this notation documented? (Is Appendix B.1 of this document the first place it's introduced?) We need some kind of reference on first usage.

Peter added text to point to the appendix.
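
For readers following the traces, a rough Python sketch of the bookkeeping behind the (block#:NUM/More/Size) notation (the appendix referenced above defines the notation; this only illustrates how a payload is cut into blocks):

    def block_fragments(payload: bytes, szx: int):
        """Yield (num, more, size, chunk) for a Block1/Block2-style transfer."""
        size = 2 ** (szx + 4)                          # CoAP block sizes are 2**(SZX+4)
        chunks = [payload[i:i + size] for i in range(0, len(payload), size)]
        for num, chunk in enumerate(chunks):
            more = 1 if num < len(chunks) - 1 else 0   # More bit: 1 on all but the last block
            yield num, more, size, chunk

    # e.g., a 300-byte CSR sent with 256-byte blocks (SZX=4):
    for num, more, size, chunk in block_fragments(b"\x00" * 300, szx=4):
        print(f"(1:{num}/{more}/{size})  {len(chunk)} bytes")
    # (1:0/1/256)  256 bytes
    # (1:1/0/256)  44 bytes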


If the server is very slow (i.e. minutes) in providing the response (i.e. when a manual intervention is needed), he SHOULD respond with

nit: RFC style puts commas after (and before) "i.e." and "e.g.".

Peter fixed.


the identical CSR to the server. As long as the server responds with response code 5.03 (Service Unavailable) with a Max-Age Option, the client SHOULD keep resending the enrollment request until the server responds with the certificate or the client abandons for other reasons.

nit: the transitive verb "abandons" has no direct object

Peter fixed in text.
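
A sketch of the client behavior this paragraph describes; send_est_request is a hypothetical helper standing in for a real CoAP-over-DTLS stack, and the attribute names on the response object are assumptions:

    import time

    SERVICE_UNAVAILABLE = "5.03"

    def enroll_with_retry(send_est_request, csr_der, max_attempts=10):
        """Resend the identical CSR while the server answers 5.03 with a Max-Age Option.

        send_est_request is a hypothetical callable: it POSTs csr_der to /est/sen and
        returns an object with .code, .max_age and .payload attributes (assumed names).
        """
        for _ in range(max_attempts):
            response = send_est_request("/est/sen", csr_der)
            if response.code == SERVICE_UNAVAILABLE and response.max_age is not None:
                time.sleep(response.max_age)   # wait as instructed, then resend the same CSR
                continue
            return response                    # the certificate, or some other final answer
        return None                            # the client abandons (e.g., a local retry limit)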


POST [2001:db8::2:1]:61616/est/sen(CON)(1:N1/0/256){CSR (frag# 1)}-->
  <-- (ACK) (1:0/1/256) (2.31 Continue)
POST [2001:db8::2:1]:61616/est/sen (CON)(1:1/1/256) {CSR (frag# 2)} -->
  <-- (ACK) (1:1/1/256) (2.31 Continue)

The first line doesn't seem to match up -- doesn't the "N1" mean "fragment N1+1" but the descriptive text at the end say "frag# 1"? Or is this supposed to be 1:0/1/256?

Right. Addressed in the example.


response. Note that the server asks for a decrease in the block size when acknowledging the first Block2.

nit: From the trace, it looks like the server is the first one to use a 128-byte block size, and it does happen in an ack message, but that ack is not acking a block2 (though it contains one). (That ack message also happens to contain part of the response.)

Right. I updated the line to be more accurate: <-- (ACK) (1:N1/0/256) (2:0/1/256) (2.04 Changed){Cert resp (frag# 1)}


Also, it surprised me somewhat that the client has to re-send the whole request (i.e., all fragments thereof) after the Max-Age interval when the server says it's ready, since that feels wasteful of bandwidth, but I assume that's just how CoAP works and not relevant for this document.

Nothing to address for this.


Section 5.8

In scenarios where it is desirable that the server generates the private key, server-side key generation should be used. Such

nit: suggest s/should be used/is available/ to avoid the appearance of tautology.

Peter fixed in text.


scenarios could be when it is considered more secure to generate at the server the long-lived random private key that identifies the client, or when the resources spent to generate a random private key at the client are considered scarce, or when the security policy requires that the certificate public and corresponding private keys are centrally generated and controlled. Of course, that does not

I can (grudgingly) accept that people are going to do server-side key generation, so I do not propose to remove it from the document. The policy case is a clear example of why the feature needs to be available, but I'm not 100% sure that I believe that server-keygen could be "more secure" given that the client needs to be able to produce secure random numbers for DTLS (though I do accept that some people will believe it to be so!). It seems likely to only be possible in some intermediate situation where the client-generated random numbers could be attacked but at substantial expense, such that paying that expense for a single handshake is "too much" for the attacker to bear, but doing it for the key generation that would give the attacker all transactions would make the expense worthwhile; this intermediate situation seems to also be transitory, since attacks only get better/cheaper. For the other case, if we were doing RSA keygen, then going from random numbers to prime generation could be enough incremental expense that offloading to the server would make sense, but I didn't think the elliptic curve stuff had the same kind of issues, so I'd like to hear more about the resource-consumption aspect as well.

This was discussed in WG mailing list.


When requesting server-side key generation, the client asks for the server or proxy to generate the private key and the certificate which are transferred back to the client in the server-side key generation response. In all respects, the server SHOULD treat the CSR as it would treat any enroll or re-enroll CSR; the only distinction here is that the server MUST ignore the public key values and signature in the CSR. These are included in the request only to allow re-use of existing codebases for generating and parsing such requests.

We need to reword this; the SHOULD is in conflict with the MUST.

Rephrased to "In all respects, the server treats the CSR as it would treat any enroll or re-enroll CSR; the only distinction here is that the server MUST ignore the public key values".


certificate and a private key. The private key Content-Format requested by the client is depicted in the PKCS#10 CSR request. If

nit: I suggest s/depicted/indicated/

Fixed by Peter.


(Section 5.3). The two representations (each consisting of two CBOR array items) do not have to be in a particular order since each

[side note: core-multipart-ct is looking to land on "multipart/mixed" semantics to resolve my outstanding Discuss point; RFC 2046 is pretty clear about the component parts "need[ing] to be in a particular order", which this is in conflict with]

Fixed by Peter.


representation is preceded by its Content-Format ID. The private key can be in unprotected PKCS#8 [RFC5958] format (Content-Format 284) or protected inside of CMS SignedData (Content-Format 280). The

Phrasing it like this makes it sound like the server can just spontaneously decide that it wants to sign the key content, as opposed to having it be dependent on the CSR's contents. Also...

Fixed in text by Peter.


SignedData is signed by the party that generated the private key, which may be the EST server or the EST CA. The SignedData is further protected by placing it inside of a CMS EnvelopedData as explained in Section 4.4.2 of [RFC7030]. In summary, the symmetrically encrypted

.... if the SignedData is not the outermost container, then we don't care what the relevant Content-Format for it is; we only care about the Content-Format for the EnvelopedData.

Also, did we explicitly consider and reject AuthEnvelopedData?

Fixed in text by Peter.


key is included in the encryptedKey attribute in a KEKRecipientInfo structure. In the case where the asymmetric encryption key is suitable for transport key operations the generated private key is encrypted with a symmetric key which is encrypted by the client defined (in the CSR) asymmetric public key and is carried in an

nit: hyphenate "client-defined"

Done in text.


Section 6

The EST-coaps-to-HTTPS Registrar MUST terminate EST-coaps downstream and initiate EST connections over TLS upstream. The Registrar MUST authenticate and OPTIONALLY authorize the clients and it MUST be

Why OPTIONAL? (Also, nit: OPTIONALLY isn't a 2119 keyword; only OPTIONAL.)

I made the "OPTIONALLY" lowercase. Authorization does not need to be performed by a Registrar; for some use cases authentication is enough.


client. For example, it could be configured to accept POP linking information that does not match the current TLS session because the authenticated EST client Registrar has verified this information when acting as an EST server.

This is close enough to a literal quote that we might think about actually quoting and using quotation marks. nit: s/POP/PoP/ if we don't do the literal quote.

I added quotes to show this is directly from RFC7030.


For some use cases, clients that leverage server-side key generation might prefer for the enrolled keys to be generated by the Registrar if the CA does not support server-side key generation. Such Registrar is responsible for generating a new CSR signed by a new key which will be returned to the client along with the certificate from

nit: "Such a Registrar"

Fixed in text by Peter.


the CA. In these cases, the Registrar MUST support random number generation using proper entropy.

Not just support -- use!

Fixed in text by Peter.


Additionally, a conversion from CBOR major type 2 to Base64 encoding MUST take place at the Registrar when server-side key generation is supported. [...]

Not always?

Always. Removed the rest of the sentence.


key, the encrypted CMS EnvelopedData blob MUST be converted to binary in CBOR type 2 downstream to the client.

I think we should reword this -- my first reading of "downstream to the client" is "after the client in the processing path", which doesn't actually help the client. Presumably we mean at the registrar, in the downstream direction, towards the client.

Peter added at the Registrar.
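
A minimal sketch of the Registrar-side re-encoding, assuming the server-side keygen item travels as a CBOR byte string (major type 2) inside the multipart container, as the quoted text says; it uses the third-party cbor2 package and treats the DER blob as opaque:

    import base64
    import cbor2  # third-party CBOR library, used only to show the byte-string framing

    def coaps_to_http(cbor_item: bytes) -> bytes:
        """Upstream at the Registrar: CBOR byte string (major type 2) -> Base64 for classic EST."""
        der = cbor2.loads(cbor_item)           # yields the raw DER bytes
        return base64.b64encode(der)

    def http_to_coaps(base64_payload: bytes) -> bytes:
        """Downstream at the Registrar, towards the client: Base64 -> CBOR byte string."""
        der = base64.b64decode(base64_payload)
        return cbor2.dumps(der)                # re-wraps the DER bytes as major type 2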


The EST-coaps-to-HTTP Registrar MUST support resource discovery according to the rules in Section 5.1.

Do we need to say anything about translation of discovered URIs?

Yes. There was already normative text saying "Table 1 contains the URI mappings between EST-coaps and EST".


Section 7

This section addresses transmission parameters described in sections 4.7 and 4.8 of [RFC7252]. EST does not impose any unique values on the CoAP parameters in [RFC7252], but the EST parameter values need to be tuned to the CoAP parameter values.

I don't understand what "but the EST parameter values need to be tuned to the CoAP parameter values" means.

Addressed in the text by Peter.


o NSTART: A parameter that controls the number of simultaneous outstanding interactions that a client maintains to a given server. An EST-coaps client is not expected to interact with more than one servers at the same time, which is the default NSTART value defined in [RFC7252].

nit: there's a mismatch between "to a given server" and "more than one servers at the same time". (Also, s/one servers/one server/.)

Peter addressed in the text.


o PROBING_RATE: A parameter which specifies the rate of re-sending non-confirmable messages. The EST messages are defined to be sent as CoAP confirmable messages, hence this setting is not applicable.

Section 5.4 only has it as RECOMMENDED to send requests in CON messages, so we should still say something here.

Peter adjusted the text.


Section 9.1

I think we probably need this document as a reference for all the allocations; as the document effectuating the registration, we are still of interest even if most details of content encoding lie elsewhere.

Correct. [This document] was added in all lines.


Section 9.2

The grammar for these entries is a bit stilted, though the existing registrations are not so far off.

nit: should ace.est.att include the word "get" like ace.est.crts does?

Fixed in text.


Section 10.1

The security considerations of Section 6 of [RFC7030] are only partially valid for the purposes of this document. As HTTP Basic Authentication is not supported, the considerations expressed for using passwords do not apply.

It may be worth explicitly stating that "the other portions of the security considerations of RFC 7030 continue to apply".

Peter added in text.


Modern security protocols require random numbers to be available during the protocol run, for example for nonces, ephemeral (EC) Diffie-Hellman key generation. This capability to generate random

nit: the comma expects a 3+ element list but we only have two elements. Just "and" suffices?

Yes, Peter adjusted the text.


Analysis SHOULD be done to establish whether server-side key generation increases or decreases the probability of digital identity theft.

In the abstract sense, this seems like a non-normative "should". But if we make it apply specifically to those deploying server-side key generation then it is appropriately normative.

Right. That is why we kept it normative.


It is important to note that sources contributing to the randomness pool used to generate random numbers on laptops or desktop PCs are not available on many constrained devices, such as mouse movement, timing of keystrokes, air turbulence on the movement of hard drive heads, as pointed out in [PsQs]. Other sources have to be used or

nit: need an "and" (or "or") to close the list.

Peter added or.


It is also RECOMMENDED that the Implicit Trust Anchor database used for EST server authentication is carefully managed to reduce the chance of a third-party CA with poor certification practices jeopardizing authentication. Disabling the Implicit Trust Anchor

We may want to call out that since the implicit database is used for the initial /crts request, that single jeporadized exchange could cause all subsequent exchanges from that client to be compromised as well.

This is self-explanatory. No need to add text here.


database after successfully receiving the Distribution of CA certificates response (Section 4.1.3 of [RFC7030]) limits any risk to the first DTLS exchange. Alternatively, in a case where a /sen request immediately follows a /crts, a client MAY choose to keep the connection authenticated by the Implicit TA open for efficiency reasons (Section 4). A client that pipelines EST-coaps /crts request

nit: is "pipelines" the right word here, given that HTTP pipelining is a thing and CoAP pipelining (probably?) isn't, and the former isn't what we're doing anyway?

Peter fixed by using interleaves.


with other requests in the same DTLS connection SHOULD revalidate the server certificate chain against the updated Explicit TA from the /crts response before proceeding with the subsequent requests. If the server certificate chain does not authenticate against the database, the client SHOULD close the connection without completing the rest of the requests. The updated Explicit TA MUST continue to be used in new DTLS connections.

I'm not going to say you shouldn't do this check, but it seems pretty easy for an attacker that knows it's servicing a /crts request to supply a response that includes a (potentially bogus) trust anchor that can certify the certificate it used for the current connection. So it's not clear how much protection this really provides.

This is to comply with the text in RFC 7030: after you get the new Explicit TA, you need to start using it.


As described in CMC, Section 6.7 of [RFC5272], "For keys that can be used as signature keys, signing the certification request with the private key serves as a POP on that key pair". The inclusion of tls- unique in the certificate request links the proof-of-possession to the TLS proof-of-identity. This implies but does not prove that only the authenticated client currently has access to the private key.

Do we want to further clarify that this means the PoP is weaker than it could be? ("no" is a fine answer, as always.)

Agreed in the mailing list that this was not necessary.


What's more, POP linking uses tls-unique as it is defined in [RFC5929]. The 3SHAKE attack [tripleshake] poses a risk by allowing

nit: "such POP linking" or "the CMC POP linking"

Peter adjusted the text.


a man-in-the-middle to leverage session resumption and renegotiation to inject himself between a client and server even when channel binding is in use. The attack was possible because of certain (D)TLS implementation imperfections. In the context of this specification,

I don't think we can solely blame the attacks on implementation imperfections (though they were certainly compounding factors). Does this sentence really add any value to the current document?

I commented it out.


binding mechanism. Such a mechanism could include an updated tls- unique value generation like the tls-unique-prf defined in [I-D.josefsson-sasl-tls-cb] by using a TLS exporter [RFC5705] in TLS 1.2 or TLS 1.3's updated exporter (Section 7.5 of [RFC8446]). Such mechanism has not been standardized yet. Adopting a channel binding

We probably should be explicit about "using a TLS Exporter value in place of the tls-unique value in the CSR", just from a writing clarity perspective.

I rephrased to "by using a TLS exporter [RFC5705] in TLS 1.2 or TLS 1.3's updated exporter (Section 7.5 of [RFC8446]) value in place of the tls-unique value in the CSR".


It might be worth splitting the triple-handshake bits (including open question) into a separate subsection so that we can make a forward reference to it from earlier in the document.

I did not split the tls-unique paragraph into a separate subsection because I am not sure we need to reference it somewhere in the draft. This is a security consideration that offers a potential solution that practically does not exist as a standard, so I am not sure where we can reference it in the draft.


value generated from an exporter would break backwards compatibility. Thus, in this specification we still depend on the tls-unique mechanism defined in [RFC5929], especially since a 3SHAKE attack does not expose messages exchanged with EST-coaps.

I suppose that new endpoint names would be one way to work through the backwards-compatibility break, though it's not entirely clear that we need to say so in this document. We probably do want to say that even though EST-coaps looks like a new protocol that could get away with changing the default, we want to preserve the ability for the RA to proxy through to a "classic" EST HTTPS server, so we are in fact constrained to use the compatible choice.

Good point. I rephrased slightly to say "value generated from an exporter would break backwards compatibility for an RA that proxies through to a classic EST server".


Regarding the Certificate Signing Request (CSR), an EST-coaps server is expected to be able to recover from improper CSR requests.

What does "recover" mean? Is it just "not crash" or is it expected to somehow still be able to issue a certificate? (If the former, that might be implicit in an RFC 3552 threat model, though saying it explicitly probably doesn't hurt.)

It is the former. Agreed. We kept it there.


Section 10.2

The Registrar proposed in Section 6 must be deployed with care, and only when the recommended connections are impossible. When POP

Do we actually explicitly say that the direct connection is recommended anywhere? If not, we should.

This is only to say that if you are going from CoAP environments to HTTP CAs you need a Registrar. We do not recommend clients going to the server directly. I rephrased a little in the text to "when direct client-server connections are not possible".


linking is used the Registrar terminating the TLS connection establishes a new one with the upstream CA. Thus, it is impossible

I think technically it terminates DTLS and establishes a new TLS connection.

Agreed. Peter adjusted the text.


for POP linking to be enforced end-to-end for the EST transaction. The EST server could be configured to accept POP linking information that does not match the current TLS session because the authenticated EST Registrar client has verified this information when acting as an EST server.

I think we need to say that the EST server "assumes" or "trusts" that the registrar has verified this information -- it is to some extent a leap of faith.

We rephrased in the text to clarify it.


The introduction of an EST-coaps-to-HTTP Registrar assumes the client can trust the registrar using its implicit or explicit TA database.

I'm not entirely sold on "trust" as the best word here (vs., e.g., validate), but don't object to it.

Rephrased to client can authenticate the Registrar.


It also assumes the Registrar has a trust relationship with the upstream EST server in order to act on behalf of the clients. When a client uses the Implicit TA database for certificate validation, she SHOULD confirm if the server is acting as an RA by the presence of the id-kp-cmcRA EKU [RFC6402] in the server certificate.

Why is this only a SHOULD?

It is not mandatory. Some clients may trust the server if its cert can be authenticated without caring if it is an RA or not.


generating the key. In such cases, the existence of a Registrar requires the client to put its trust on the registrar doing the right thing if it is generating the private key.

This is true, though it (probably correctly, for our purposes) does not give any indication of what "the right thing" is intended to be :)

Peter adjusted in the text.


Section 11.2

I think draft-ietf-lamps-rfc5751-bis, RFC 5958, RFC 8075, and RFC 8422 should be normative references.

If we're going to say that this spec requires conformance to RFC 7925's DTLS profile (which we currently don't, but I suggest above that we do), it will need to be a normative reference as well.

I don't understand why RFC 7231 is in the 'reference' column for application/csrattrs in Table 4; its presence there would normally suggest that it should be a normative reference.

All addressed in the references by Peter.


RFC 5929 is interesting, as it is of course a normative reference for RFC 7030, which we use normatively, but we may not need to cite it directly as a normative reference from this document.

Addressed in the references by Peter.


It may be worth citing RFC 7525 as BCP 195 where it appears.

Added in the references by Peter.


Appendix A

transported in hex, but in binary. The payloads are shown unencrypted. In practice the message content would be transferred over an encrypted DTLS tunnel.

I expect a tsvart reviewer to complain about the use of the word "tunnel" here, and suggest "channel" as an alternative.

Fixed by Peter.


The certificate responses included in the examples contain Content- Format 281 (application/pkcs7). If the client had requested Content- Format TBD287 (application/pkix-cert) by querying /est/skc, the server would respond with a single DER binary certificate.

Just to check my own understanding: this will always appear within a multipart-core container, right?

Yes. Also discussed in mailing list.


Appendix A.1

Option (Uri-Port)
  Option Delta = 0x4 (option# 3+4=7)
  Option Length = 0x4
  Option Value = 9085
Option (Uri-Path)
  Option Delta = 0x4 (option# 7+4=11)
  Option Length = 0x5
  Option Value = "est"

This is more for my own edification than the document's sake, so thank you for your time, but what accounts for the "extra" length here? The "est" is three bytes, and what makes up for the other two? Also, I assume that the port value of 9085 is the decimal value, which gets CBOR encoded into two bytes of integer encoding plus the byte with additional information 25 to indicate the two-byte integer, and another byte that I need help accounting for.

You are right. We changed the lengths to exclude counting the quotes. And similarly we used two bytes for the port.
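
The arithmetic behind that fix, for anyone checking the hex: CoAP encodes Uri-Port as a plain unsigned-integer option (no CBOR involved), so 9085 needs exactly two bytes, and "est" without the surrounding quotes is three bytes. A quick Python check:

    import struct

    port_bytes = struct.pack(">H", 9085)      # minimal big-endian uint option value
    path_bytes = b"est"                       # Uri-Path value carries no quotes on the wire
    print(port_bytes.hex(), len(port_bytes))  # 237d 2  -> Option Length = 0x2
    print(path_bytes, len(path_bytes))        # b'est' 3 -> Option Length = 0x3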


Appendix A.2

Do we want to say anything about the IDevID in the CSR/cert? I note that the breakdown in Appendix C.2 (looks like openssl output) does not decode the otherName (though asn1parse can be convinced to do so).

Text was added: "The CSR also contains an id-on-hardwareModuleName hardware identifier to customize the returned certificate to the requesting device (see [RFC7299] and [I-D.moskowitz-ecdsa-pki])."


Appendix A.3

I'm having trouble validating the private key in the PKCS#8 component: asn1parse says:

$ unhex|openssl asn1parse -inform der
308187020100301306072a8648ce3d020106082a8648ce3d030107046d30
6b02010104200b9a67785b65e07360b6d28cfc1d3f3925c0755799deeca7
45372b01697bd8a6a144034200041bb8c1117896f98e4506c03d70efbe82
0d8e38ea97e9d65d52c8460c5852c51dd89a61370a2843760fc859799d78
cd33f3c1846e304f1717f8123f1a284cc99f
 0:d=0 hl=3 l= 135 cons: SEQUENCE
 3:d=1 hl=2 l=   1 prim: INTEGER :00
 6:d=1 hl=2 l=  19 cons: SEQUENCE
 8:d=2 hl=2 l=   7 prim: OBJECT :id-ecPublicKey
17:d=2 hl=2 l=   8 prim: OBJECT :prime256v1
27:d=1 hl=2 l= 109 prim: OCTET STRING [HEX DUMP]:306B02010104200B9A67785B65E07360B6D28CFC1D3F3925C0755799DEECA745372B01697BD8A6A144034200041BB8C1117896F98E4506C03D70EFBE820D8E38EA97E9D65D52C8460C5852C51DD89A61370A2843760FC859799D78CD33F3C1846E304F1717F8123F1A284CC99F

which doesn't look like an RFC5208 PrivateKeyInfo:

  PrivateKeyInfo ::= SEQUENCE {
    version                   Version,
    privateKeyAlgorithm       PrivateKeyAlgorithmIdentifier,
    privateKey                PrivateKey,
    attributes           [0]  IMPLICIT Attributes OPTIONAL }

  Version ::= INTEGER

  PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier

  PrivateKey ::= OCTET STRING

  Attributes ::= SET OF Attribute

due to the lack of OID for privateKeyAlgorithm, etc. (openssl pkcs8 also chokes on it, but I don't have a working example and can't rule out user error there.)

Even that giant OCTET STRING 27 bytes in doesn't seem to match a PrivateKeyInfo:

$ unhex|openssl asn1parse -inform der -strparse 27
308187020100301306072a8648ce3d020106082a8648ce3d030107046d30
6b02010104200b9a67785b65e07360b6d28cfc1d3f3925c0755799deeca7
45372b01697bd8a6a144034200041bb8c1117896f98e4506c03d70efbe82
0d8e38ea97e9d65d52c8460c5852c51dd89a61370a2843760fc859799d78
cd33f3c1846e304f1717f8123f1a284cc99f
 0:d=0 hl=2 l= 107 cons: SEQUENCE
 2:d=1 hl=2 l=   1 prim: INTEGER :01
 5:d=1 hl=2 l=  32 prim: OCTET STRING [HEX DUMP]:0B9A67785B65E07360B6D28CFC1D3F3925C0755799DEECA745372B01697BD8A6
39:d=1 hl=2 l=  68 cons: cont [ 1 ]
41:d=2 hl=2 l=  66 prim: BIT STRING

though the OCTET STRING does have the private key and the BIT STRING has the public key's contents as depicted in C.3 (details of that too boring to show).

So I have to wonder if I'm messing something up, somewhere.

Peter removed the passphrase encryption for the private key.


Section A.4

I'm not sure how useful repeating the RFC 7030 csrattrs is: do we expect (e.g.) brainpoolP384r1 to be relevant for our readers?

No. This is just an example for completeness.


Appendix B.1

and BLOCK option Block2. The minimum PMTU is 1280 bytes, which is the example value assumed for the DTLS datagram size. The example

I'm not seeing how this relates to the rest of the section.

Peter removed.


The CoAP message adds around 10 bytes, the DTLS record 29 bytes. To avoid IP fragmentation, the CoAP Block

The DTLS overhead can also vary based on cipher suite, padding, etc., so a bit more qualification ("we assume", "around", etc.) might be in order.

Added around 29 bytes.


with exponent (2**(SZX+4)) separated by slashes. The Length 64 is used with SZX=2 to avoid IP fragmentation. The CoAP Request is sent

We just said "to avoid IP fragmentation" ten lines ago.

Peter removed.


Should we be using the same Token value in two different exchanges in this document?

It is not important as discussed in the mailing list.


Appendix B.2

In this example, the requested Block2 size of 256 bytes, required by the client, is transferred to the server in the very first request message. The block size 256=(2**(SZX+4)) which gives SZX=4. The

I don't see a Block2 size in the first request message, just Block1:

We meant the 256 in 1:0/1/256.
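
For completeness, the SZX arithmetic used throughout these traces (straight from the 2**(SZX+4) formula quoted above):

    def block_size(szx: int) -> int:
        """CoAP block size for a given SZX value."""
        return 2 ** (szx + 4)

    def szx_for(size: int) -> int:
        """Inverse mapping for the power-of-two sizes CoAP allows (16..1024)."""
        szx = size.bit_length() - 5            # because size == 2**(szx+4)
        assert block_size(szx) == size, "size must be one of 16, 32, ..., 1024"
        return szx

    print(block_size(4), szx_for(256))   # 256 4  -- the 256 and SZX=4 discussed here
    print(block_size(2))                 # 64     -- the 64-byte blocks used in Appendix B.1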


POST [2001:db8::2:321]:61616/est/sen (CON)(1:0/1/256) {CSR req} -->
  <-- (ACK) (1:0/1/256) (2.31 Continue)

Also, we seem to have stopped using the "{CSR req frag#2}" notation that we had in the main body text.

Peter fixed.


Appendix C.2

[I mentioned otherName's non-decoding earlier]

This is sample output; we chose not to address it and add explanatory text, as stated above.


Appendix C.3

I find it kind of amusing that we have a "Netscape Comment" in the generated cert :)

Indeed! Nothing to change there.


csosto-pk commented 5 years ago

A separate response to one of Ben's specific questions

When proof-of-possession is desired, a set of actions are required regarding the use of tls-unique, described in Section 3.5 in [RFC7030]. The tls-unique information consists of the contents of

I see the note in the shepherd writeup about converting EST to use TLS exporters rather than tls-unique in a separate update document. Where is that work happening? The discussion in Section 10.1 is helpful (and we could do well to reference it from here) but does not inspire great confidence in the reader that such work will come to fruition.

Sean Turner had expressed interest in submitting something with me about this. He told me he was going to ask for help from Benjamin B. from INRIA as well. That was back in April 2019.

petervanderstok commented 5 years ago

I don't think there are any volunteers for the moment. Panos K. wrote on 2019-09-04 19:25:

When proof-of-possession is desired, a set of actions are required regarding the use of tls-unique, described in Section 3.5 in [RFC7030]. The tls-unique information consists of the contents of

I see the note in the shepherd writeup about converting EST to use TLS exporters rather than tls-unique in a separate update document. Where is that work happening? The discussion in Section 10.1 is helpful (and we could do well to reference it from here) but does not inspire great confidence in the reader that such work will come to fruition.


csosto-pk commented 5 years ago

Some more comments from Ben's AD review that we addressed after -13 iteration.

I think we also need to at least mandate extended-master-secret to be used on the underlying DTLS connection. (That is, assuming that we don't want to lock down to specific, non-vulnerable, ciphersuites -- RFC 7925 only has TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 as MTI, not MTU.)

Extended-master-secret was discussed on the list. Ben agreed with Michael that, from a crypto perspective, we should do that; from a protocol-specification perspective, we should retain parity with classic EST and only update when it does. So we should probably mostly ignore this other than trying to kick off work on classic EST, and mandating extended-master-secret.


Clients and servers MUST support the short resource EST-coaps URIs.

Are they expected to also support the long EST URIs over CoAP?

No. We used to have it there, but it was removed after WG feedback pointed out that we couldn't make up our minds about what to use and were letting implementers use what they want. So we decided to be more specific and pick the short URIs only.


In the context of CoAP, the presence and location of (path to) the EST resources are discovered by sending a GET request to "/.well- known/core" including a resource type (RT) parameter with the value "ace.est*" [RFC6690]. The example below shows the discovery over

Is that a literal asterisk, for ace dot est star? (1) Why? (2) It probably merits a mention in the text to confirm it for the reader.

Nothing to fix here. Discussed in the WG list.


encryptedKey attribute in a KeyTransRecipientInfo structure. Finally, if the asymmetric encryption key is suitable for key agreement, the generated private key is encrypted with a symmetric key which is encrypted by the client defined (in the CSR) asymmetric

In the key-agreement case, the symmetric key-encryption key is the result of the key-agreement operation, no? In which case it is not itself encrypted, but rather the server's ephemeral public value is sent.

No. In the asymmetric case the server-generated private key is still encrypted with a symmetric key which the client does not know. In order to provide that key to the client for decryption, the server uses asymmetric encryption. So the server encrypts the symmetric key with the client's public key (an asymmetric transport key or an asymmetric key-agreement key). The client decrypts with its private key, recovers the symmetric key, and can then decrypt the server-generated identity private key. Slightly confusing, I know...
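
To make the flow concrete, here is a toy Python sketch (using the pyca/cryptography package) of the hybrid-encryption idea in the key-transport case; it is deliberately not the CMS EnvelopedData/KeyTransRecipientInfo encoding the draft actually uses, and RSA stands in for "an asymmetric key suitable for transport key operations":

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Client key pair; in EST-coaps the public half would come from the CSR.
    client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Server-generated identity private key, serialized as unencrypted PKCS#8 DER.
    generated = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pkcs8_der = generated.private_bytes(serialization.Encoding.DER,
                                        serialization.PrivateFormat.PKCS8,
                                        serialization.NoEncryption())

    # Server side: encrypt the PKCS#8 blob under a fresh symmetric key ...
    cek, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
    protected_key = AESGCM(cek).encrypt(nonce, pkcs8_der, None)
    # ... and encrypt that symmetric key to the client's public key (the role of
    # encryptedKey in KeyTransRecipientInfo; a key-agreement key would instead go
    # through KeyAgreeRecipientInfo).
    wrapped_cek = client_key.public_key().encrypt(cek, oaep)

    # Client side: recover the symmetric key, then the generated private key.
    recovered_cek = client_key.decrypt(wrapped_cek, oaep)
    assert AESGCM(recovered_cek).decrypt(nonce, protected_key, None) == pkcs8_der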


public key and is carried in an recipientEncryptedKeys attribute in a KeyAgreeRecipientInfo.

[RFC7030] recommends the use of additional encryption of the returned private key. For the context of this specification, clients and servers that choose to support server-side key generation MUST support unprotected (PKCS#8) private keys (Content-Format 284). Symmetric or asymmetric encryption of the private key (CMS EnvelopedData, Content-Format 280) SHOULD be supported for deployments where end-to-end encryption needs to be provided between the client and a server. Such cases could include architectures where an entity between the client and the CA terminates the DTLS connection (Registrar in Figure 4).

This carefully says nothing about recommendations for use, only for software support. Are we letting 7030's recommendation for use of encryption stand? It's probably worth being explicit, either way.

EST mandated two layers of encryption but did not say how the extra encryption can be established. It is counter-intuitive to say we don't trust the DTLS connection and then require more encryption on top of it. Given how hard it is to establish the keys for the extra encryption, and that if the DTLS channel is not secure we have bigger problems, I do not agree this paragraph should be here. I added text to say "This document does not strongly recommend CMS encryption on top of the DTLS channel like [RFC7030] unless mandated by the use case."


It is recommended, based on experiments, to follow the default CoAP configuration parameters ([RFC7252]). However, depending on the implementation scenario, retransmissions and timeouts can also occur on other networking layers, governed by other configuration parameters. A change in a server parameter MUST ensure the adjusted value is also available to all the endpoints with which these adjusted values are to be used to communicate.

I don't understand who this is a normative requirement upon. Is it the network operator's, to propagate configuration changes? Or is there supposed to be some automated protocol that makes adjusted values available?

Point taken and discussed in the list. The text was changed to reflect Ben's recommendation and now reads "When a change in a server parameter has taken place, the parameter values in the communicating endpoints MUST be adjusted as necessary."


tenth packet of 63 bytes. The client sends an IPv6 packet containing the UDP datagram with the DTLS record that encapsulates the CoAP request 10 times. The server returns an IPv6 packet containing the UDP datagram with the DTLS record that encapsulates the CoAP response. The CoAP request-response exchange with block option is

The definite vs. indefinite articles here don't seem quite right, and each of the 10 datagrams do not encapsulate the entire CoAP request.

Discussed in the list; we converged on the text added to the draft: "The client sends an IPv6 packet containing a UDP datagram with DTLS record protection that encapsulates a CoAP request 10 times (one fragment of the request per block). The server returns an IPv6 packet containing a UDP datagram with the DTLS record that encapsulates the CoAP response."

csosto-pk commented 5 years ago

This issue is to be closed after Ben K. confirms it and we upload the -14 iteration.

csosto-pk commented 4 years ago

A couple of follow-ups from Ben K. and how we addressed them:

In Section 4, I think we need to put the "for" back in "requests for a trusted certificate list".

ACK. Done.


Also, refresh my memory: did we decide that there's no need to explicitly mandate the use of the "extended_master_secret" TLS extension?

Yes. There is a thread with Michael Richardson, Jim S. about updating EST-coaps to use RFC5705 in the CSR.

Jim had suggested that

If this work is going to happen it needs to be done as an update to the EST RFC. I don't know if it would be better to do that in LAMPS rather than here. Currently I do not know of anybody who is going to do this. This is my issue and I am willing to let it slide for the time being.

Ben said

From a crypto perspective, we should do that. From a protocol-specification perspective, we should retain parity with classic EST and only update when it does. So we should probably mostly ignore this other than trying to kick off work on classic EST, and mandating extended-master-secret.

And Michael agreed

okay, good.

So, we decided to let this slide assuming it would take place in EST.

BTW, I agree with that approach. I think extended-master-secret is a good idea, but I don't see it as specific to EST or EST-coaps. I don't think we should add normative language for extended-master-secret as it is assumed for any app using (D)TLS as the underlying tunnel. The (D)TLS requirements the EST and EST-coaps RFCs impose are mostly related to the protocol itself (CSR, PoP, etc.) and the environments (ciphersuites for CoAPS).


I'd also change the note about supported_groups vs. Supported Elliptic Curves to read "In addition, in DTLS 1.3 the Supported Elliptic Curves extension has been renamed to Supported Groups."

ACK. Fixed.


I think we can move /csrattrs to the bottom of Table 2 (thank you for changing Table 1!).

ACK. Fixed.


With the changes to the example in Figure 3, can you walk me through the block-size negotiation? Quoting:

POST [2001:db8::2:1]:61616/est/sen(CON)(1:N1/0/256){CSR (frag# N1+1)}-->
        |
   ... Immediate response when certificate is ready ...
        |
  <-- (ACK) (1:N1/0/256) (2:0/1/256) (2.04 Changed){Cert resp (frag# 1)}
POST [2001:db8::2:1]:61616/est/sen (CON)(2:1/0/128)           -->
  <-- (ACK) (2:1/1/128) (2.04 Changed) {Cert resp (frag# 2)}

So the ACK to the final fragment of the POST includes (2:0/1/256), or the first fragment of a 256-byte-fragmented version of the response.

Right.

The client then goes and asks for (2:1/0/128), which is the second fragment of a 128-byte-fragmented version of the response. Is that just going to be the last 128 bytes of the thing it already got from the server? If so, is that something it would actually do (e.g., if it had to drop part of the server's response due to a buffer-size limitation) or is it not possible to only have part of a fragment (so it would need to either ask for (2:0/0/128) or (2:2/0/128))?

The client ACKs the first Block2 fragment the server sent, but you are on to something here. The client cannot ask for 128-byte blocks unless it is its first Block2 request (NUM=0). So, Fig 2 and Fig 3 violated RFC 7959 by asking for smaller (128B) blocks when acknowledging the first Block2 from the server. To ask for a Block2 size the client would need to add a Block2 option in its last fragmented Block1 request. That would complicate the example, which was not intended as a CoAP Block example. So I updated both Fig 2 and Fig 3 to not use /128 and changed the text too. That was a good catch.


It looks like you removed the text about "[the two representations] do not have to be in a particular order since each representation is preceded by its Content-Format ID" based on my remark about core-multipart-ct; that document has since been approved by the IESG and is explicitly confirming that there is no specific ordering requirement (in contrast to multipart/mixed), so we could put that clause back in this document if desired.

Good to know. I added it back.


I consider it more likely than not that a directorate reviewer will want to tweak the added language at the end of Section 5.8 explaining our divergence from RFC 7030; if you want to preemptively reword, my suggestion would be "Although [RFC7030] strongly recommends that clients request the use of CMS encryption on top of the TLS channel's protection, this document does not make such a recommendation; CMS encryption can still be used when mandated by the use case."

Looks good. I updated the text.


All added in https://github.com/SanKumar2015/EST-coaps/commit/d16c53d3360430b5587021dc1a2d31f668c0c0fe#comments

kaduk commented 4 years ago

Also, refresh my memory: did we decide that there's no need to explicitly mandate the use of the "extended_master_secret" TLS extension?

Yes. There is a thread with Michael Richardson, Jim S. about updating EST-coaps to use RFC5705 in the CSR.

Jim had suggested that

If this work is going to happen it needs to be done as an update to the EST RFC. I don't know if it would be better to do that in LAMPS rather than here. Currently I do not know of anybody who is going to do this. This is my issue and I am willing to let it slide for the time being.

Ben said

From a crypto perspective, we should do that. From a protocol-specification perspective, we should retain parity with classic EST and only update when it does. So we should probably mostly ignore this other than trying to kick off work on classic EST, and mandating extended-master-secret.

And Michael agreed

okay, good.

So, we decided to let this slide assuming it would take place in EST.

BTW, I agree with that approach. I think extended-master-secret is a good idea, but I don't see it as specific to EST or EST-coaps. I don't think we should add normative language for extended-master-secret as it is assumed for any app using (D)TLS as the underlying tunnel. The (D)TLS requirements the EST and EST-coaps RFCs impose are mostly related to the protocol itself (CSR, PoP, etc.) and the environments (ciphersuites for CoAPS).

Hmm, I think I was interpreting that exchange differently -- I thought the work that would take place in (on?) EST was to switch from using tls-unique to using TLS Exporters; using extended-master-secret can be done independently of that change and is something we could fairly easily mandate for coap-EST.

I added the sentence "Implementers should use the Extended Master Secret Extension in DTLS [RFC7627] to prevent such attacks" in Section 10.1, where we talk about the 3SHAKE attack that RFC 7627 and extended-master-secret aim to protect against.

The Commit is here https://github.com/SanKumar2015/EST-coaps/commit/150b8cab423de7bc34bdcc13fbf01477b30ed5f1

csosto-pk commented 4 years ago

Addressed in https://tools.ietf.org/html/draft-ietf-ace-coap-est-15. Closing this issue.