bblfish opened 8 years ago
@dlongley and @msporny seem to favor `Signature-Date` over `Signature-Time`. I am ok with that. I'd just point out two small issues that could favour `Signature-Time`:

- the `Date` header: this one is ISO 8601 format
- `Signature-DateTime` would be precise, but it's really a bit long for a header

What does the larger community think? For the moment `Signature-Date` is winning 2 to 1, and so I have changed my code and the above explanation to use that.
I have now implemented the client and server parts of the protocol in Scala and Scala-JS respectively and have gotten it to work. I have verified that I can display pages served from localhost that then fetch and intercept the 401 from https://joe.example:8443/ and make a new request that succeeds. Currently the request fails again because the access control rules on the server don't know how to give rights to a WebKey-identified user, which is next on my todo list. This works in Chrome Canary and Firefox Developer Edition. I have not had time to test it more widely.
The current server code is here:

- The core specification implementation part: the `Authorize: Signature...` header
- The web facing server layer ( the `Link` header presumably )
- The client: where the `WWW-Authenticate: Signature...` header is found.

Here is my experience using the Fetch API and web crypto from https://joe.example to https://jane.example: it requires 3 requests:
1) The browser first makes a `GET` call which returns the `401` with CORS headers.

It is good news that the first call is actually a `GET` and not an `OPTIONS`, as this leads one to think that on following calls, if the client submits the correct cookie, this will be the only call needed. It also suggests that perhaps in future sessions the client can make calls to the same server by immediately starting off with an `Authorization:` header, avoiding the following steps.
2) The browser then makes an `OPTIONS` call which returns a `200` with the same CORS headers.

This is weird. Should it not just use the CORS headers from the `401` response in the previous step?
3) The JS API intercepts the initial `401` from the browser's `GET` in 1) and, after adding an `Authorization: Signature...` header, makes a new call that should return a `200` ( in this case it still returns a `401` because the server is not set up properly ).
The server could then set a cookie, and from then on each request to that server would only be the `OPTIONS` call followed by the actual call. But if so, which of the browser or the application should manage the cookie?
The only weird thing here is the additional `OPTIONS` call, which does not seem necessary when the `GET` returns the correct headers. ( Perhaps the server does not? )
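The 3-step flow above can be sketched as two small helpers: parsing the `WWW-Authenticate: Signature...` challenge from the `401`, and building the headers for the retried request. This is an illustrative sketch, not the repo's actual code; the Web Crypto signing step is omitted, and `signatureBase64` stands in for its result.

```javascript
// Parse a challenge such as:
//   WWW-Authenticate: Signature realm="/ldp",headers="(request-target) host signature-date"
// into a parameter object, or return null if it is not a Signature challenge.
function parseSignatureChallenge(wwwAuthenticate) {
  if (!wwwAuthenticate.startsWith("Signature")) return null;
  const params = {};
  const re = /(\w+)="([^"]*)"/g;
  let m;
  while ((m = re.exec(wwwAuthenticate)) !== null) params[m[1]] = m[2];
  return params;
}

// Build the headers for the retried request after the 401 (step 3 above).
// `keyId` and `signatureBase64` would come from the client's WebKey and
// the Web Crypto signing call, which this sketch omits.
function buildRetryHeaders(challenge, keyId, signatureBase64) {
  const headerList = challenge.headers || "signature-date";
  return {
    "Signature-Date": new Date().toISOString(),
    Authorization:
      `Signature keyId="${keyId}",algorithm="rsa-sha256",` +
      `headers="${headerList}",signature="${signatureBase64}"`
  };
}
```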
The only advantage of a CORS proxy would be that the proxy could act as a cache for remote resources and provide a consistent interface to them, such as a `SEARCH` method on resources or pre-fetching of resources, until those types of functionality are more widely available.
You can cache the preflight on a per-resource basis. Caching it for an entire site might happen someday, but is not possible today.
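Per-resource preflight caching is controlled by the `Access-Control-Max-Age` response header. A sketch of what a server might return on the `OPTIONS` preflight; the header names are the standard CORS ones, but the values here are illustrative.

```javascript
// Sketch: CORS response headers for the OPTIONS preflight that let the
// browser cache the preflight result for this resource.
function preflightHeaders(requestOrigin) {
  return {
    "Access-Control-Allow-Origin": requestOrigin, // echo the origin, not "*", when credentials are in play
    "Access-Control-Allow-Methods": "GET, OPTIONS",
    "Access-Control-Allow-Headers": "Authorization, Signature-Date, User",
    "Access-Control-Max-Age": "600" // browser may reuse this preflight for 10 minutes
  };
}
```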
Ok: once authentication happens and cookies are set, then one only needs 1 connection for `GET` requests. This is very good news.

Still, this means that on the first unauthenticated request the browsers arguably make one connection too many: the `OPTIONS` that follows the initial `401` is not necessary if the original `401` returns the correct CORS headers. ( see the previous snapshots )
On to cookies. It is easy to have the server set a signed cookie for the WebKey. The client needs to ask for `credentials` in the request, as shown in the following scala-js code:

```scala
val requestInit = literal(
  headers = literal("Accept" -> rdfMimeTypes),
  requestCache = RequestCache.reload,
  credentials = RequestCredentials.include // <- does not work if server's Access-Control-Allow-Origin is set to *
).asInstanceOf[js.Dictionary[js.Any]]
val request = new HttpRequest(proxiedURL.toString, requestInit)
```
The server also needs to make sure the `Access-Control-Allow-Origin` header is set to the origin, or the JS will throw an exception:
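As the comment in the Scala snippet notes, a request with `credentials: include` fails if the server answers with the `*` wildcard: the browser requires the exact origin to be echoed back, together with `Access-Control-Allow-Credentials`. A sketch of the server-side decision; the whitelist is illustrative.

```javascript
// Sketch: CORS headers for a credentialed request. The "*" wildcard is
// rejected by browsers when credentials are included, so the server must
// echo the request's Origin and allow credentials explicitly.
function corsForCredentials(requestOrigin, allowedOrigins) {
  if (!allowedOrigins.includes(requestOrigin)) return null; // send no CORS headers
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Credentials": "true",
    "Vary": "Origin" // the response differs per origin, so caches must key on it
  };
}
```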
The question now is whether it is actually a good idea to allow JS apps to use the normal cookie mechanism. What are the dangers? This would then allow any JS to act on the LDP resources. It may be better if the user were to allow JS from one origin at a time. This could be done by the server setting cookies for each origin with a `Set-Origin-Cookie` header, and the browser JS adding an `Origin-Cookie` header that would act exactly like a Cookie, but these would be under the full control of the origin, which could store them in IndexedDB or local storage. The server's access control rules could then allow access to certain resources for any key allowed by the user - ( WebID authentication over HTTP Signature, which would work like WebID-RSA ) - some resources only to some keys, and others to the browser itself.
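The `Set-Origin-Cookie` / `Origin-Cookie` pair proposed here is hypothetical, but the client side of the idea can be sketched: the JS app keeps the token the server issued, keyed by origin, and replays it on later requests. A `Map` stands in for localStorage or IndexedDB.

```javascript
// Sketch of the client side of the hypothetical Origin-Cookie proposal.
// The store is keyed by origin so each origin's JS only ever sees its own token.
const originCookies = new Map();

// Called when a response carries the (hypothetical) Set-Origin-Cookie header.
function storeOriginCookie(origin, token) {
  originCookies.set(origin, token);
}

// Headers to merge into a later request to the same origin.
function originCookieHeader(origin) {
  const token = originCookies.get(origin);
  return token === undefined ? {} : { "Origin-Cookie": token };
}
```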
@bblfish - what do you see as the advantages of HTTP-Signature over the proposed WebID-RSA mechanism?
thanks @dmitrizagidulin for the question.

1. The main advantage is that HTTP-Signature is already in RFC format ( https://tools.ietf.org/html/draft-cavage-http-signatures-05 ), has the support of players such as Oracle, Amazon, Digital Bazaar ( @msporny ), and others, and has already gone through an RFC process. We would have to do all that work ourselves just to end up in exactly the same space, and we have to do that work if we are going to have any chance of being taken seriously. It's one less thing people can criticise us for.
2. HTTP-Signature has a few more features than WebID-RSA, which will allow us to answer criticisms more easily.
   - It allows one to sign any number of headers, so it's easier to fix things if necessary without breaking other implementations. And we don't quite know what's out there on the web.
   - Also, if someone discovers a problem with RSA - some backdoor nobody knew about - it would be easy to switch to a new crypto algorithm with minimal change.
3. There is no reason we should not, in SoLiD, specify a subset of HTTP-Signature as the one we require implementations to understand. For example, we can specify that we expect SoLiD implementations initially to only implement the RSA algorithm. We can do this to make it easier for people to get the basics going, and to make testing simpler.

+1
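Point 3 (requiring only a subset of HTTP-Signature) could be enforced server-side by rejecting any Signature authorization whose `algorithm` parameter falls outside the allowed set. A sketch; the set of accepted algorithms is illustrative.

```javascript
// Sketch: restrict accepted HTTP-Signature algorithms to the subset the
// SoLiD spec requires. "rsa-sha256" stands in for that required subset.
const ALLOWED_ALGORITHMS = new Set(["rsa-sha256"]);

function algorithmAccepted(authorizationHeader) {
  const m = /algorithm="([^"]+)"/.exec(authorizationHeader);
  return m !== null && ALLOWED_ALGORITHMS.has(m[1]);
}
```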
HTTP-Signature would be great, especially since TLS client certificates are somewhat problematic with HTTP/2.
Moving this to solid/issues soon.
The HTTP-Signatures spec has a github repo now and a list of implementations.
Btw, we do have a spec for using Signing HTTP Messages (now being developed at the IETF) with Solid, called HttpSig.
I have an implementation in Scala of the IETF Signing HTTP Messages draft 07 in the httpSig repo. Currently it works with JVM-based Akka. I'm going to try to get it to work with http4s next, so I can use it in the browser with JS - and it could also be made to work on nodejs.
My EU funding is coming to an end, so if anyone has real needs for other implementations this is the best time to contact me. I think it should be possible to make releases even for Servlets... :-)
@dlongley and @msporny's draft-cavage-http-signatures-05 has a few implementations, and is written in a style that has a good chance of being adopted by the IETF. It is generic enough to satisfy a wide set of use cases. And there is strength in numbers. This suggests that we use it as the basis for WebID-RSA ( though it could do with a better name, see issue 5 ).
Of course we need to see if it works for us. That is what I have been working on:
What the SoLiD spec could suggest is a number of headers to use for authentication. Here is my first proposal. The SoLiD spec could suggest that clients sign at least the following headers:

- `User` header for the WebID, when the user has one. This avoids an intermediary adding a WebID to the request and pointing the WebID profile to the WebKey document. ( Assuming the WebKey document does not point to the WebID for reasons of privacy )
- `Signature-Date` (name open for discussion). This header is needed so that the message cannot be used in a replay attack - it forms a good nonce. The server could also verify that this date is within a couple of seconds at most of the actual `Date` header sent by the browser. We are doing this over TLS, but these headers could end up in the logs, and those logs could be stolen.
- `(request-target)` seems like a good idea too. ( can that in some way be thought of as the nonce? )
- `Host`
The `Authorization` header would then look like this

note: the `\` above indicates that it and the newline character that follows are just for display purposes

`Base64(RSA-SHA256(signing string))` is a function on a signing string, specified below, which would be something like:

What SoLiD adds to Http-Signature, then, is the definition of `Signature-Date`, which gives the time of the Signature, with the requirement that it be no more than a few seconds out of sync with the real date, and the requirement of `User` for the WebID when the user has one (which is already something SoLiD uses). This would then also require the WebID Working Group at some point to define WebID verification given a key and such a header (not difficult: just need to dereference the WebID and see if it points to the key, I think).
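The signing string in draft-cavage is built from the listed headers: each becomes a lowercased `name: value` line, and the pseudo-header `(request-target)` expands to the lowercased method plus the path. A sketch of that construction for the headers proposed above; header names in the `headers` map are assumed to already be lowercase.

```javascript
// Sketch: build a draft-cavage style signing string. The result is what
// would be fed to Base64(RSA-SHA256(signing string)).
function signingString(headerList, method, path, headers) {
  return headerList
    .map(name =>
      name === "(request-target)"
        ? `(request-target): ${method.toLowerCase()} ${path}`
        : `${name}: ${headers[name]}`)
    .join("\n");
}
```

For example, a `GET` of `/2013/card` signed over `(request-target)`, `host`, and `signature-date` produces one line per header, newline-separated, in the listed order.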
The Initial SoLiD server's request that could have launched this would then have looked like this:
The definition of the `Signature-Date` header is the precise time, in ISO 8601 format with sub-millisecond precision, at which the header was signed. You can get this in JS with `new Date().toISOString()`.

This would need to be added to the IETF header registry referred to by section 8.1 of RFC 7230 on HTTP/1.1.
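Assuming the value is produced with `Date#toISOString` - which in practice gives millisecond, not sub-millisecond, precision, since that is as fine-grained as a JS `Date` gets - generation and the server-side skew check discussed above can be sketched as:

```javascript
// Sketch: produce a Signature-Date value on the client.
function signatureDate(now = new Date()) {
  return now.toISOString(); // e.g. "2016-02-05T20:58:00.123Z"
}

// Sketch: server side, accept the header only if it is within maxSkewMs
// of the server's clock (the "few seconds" tolerance discussed above).
function withinSkew(signatureDateValue, maxSkewMs, now = new Date()) {
  const signed = Date.parse(signatureDateValue);
  return !Number.isNaN(signed) && Math.abs(now.getTime() - signed) <= maxSkewMs;
}
```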
[1] The core verification code, without the HTTP header setting: