ipfs / pinning-services-api-spec

Standalone, vendor-agnostic Pinning Service API for IPFS ecosystem
https://ipfs.github.io/pinning-services-api-spec/
Creative Commons Zero v1.0 Universal

Defining API access controls #6

Closed: lidel closed this issue 4 years ago

lidel commented 4 years ago

Known constraints

Q: How specific should this spec be?

v0.0.1 explicitly mentions JWT, but some vendors may prefer to use something else (e.g. a simple "SECRET API KEY" generated per "bucket", etc.). Clients and user interfaces do not care what is inside an opaque "API token" – they will simply store it in config during initial configuration and then send it as-is with each request.

For those reasons we may simplify the spec to require an opaque string (labeled "API KEY" in GUIs) passed in an HTTP header, leaving its details up to pinning services – especially if we remove the need for copying the API key/token and instead do authorization at a well-known URL at the Pinning Service.
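
To make the "opaque string" idea concrete, here is a minimal client sketch in Go. The host, the /pins path, and the use of the Authorization header are assumptions for illustration only – how the token is passed is exactly one of the open questions below:

```go
package main

import (
	"fmt"
	"net/http"
)

// listPins sends the stored credential as-is; the client never inspects it.
// The endpoint path and header choice are illustrative, not part of the spec.
func listPins(endpoint, apiToken string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, endpoint+"/pins", nil)
	if err != nil {
		return nil, err
	}
	// Opaque string copied from the pinning service during configuration.
	req.Header.Set("Authorization", "Bearer "+apiToken)
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := listPins("https://pinning-service.example.com", "IAMALARGERANDOMSTRING12345")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```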

Q: What should the authorization flow look like?

or

Q: How should the authorization token be passed with requests?

Which way of passing authorization credentials makes sense?

Looking for feedback.

Gozala commented 4 years ago

As I've brought up in #7, I think a 1:1 association of user to device is flawed. In fact, a single device might operate multiple service "buckets", and a user might have multiple devices that operate a single bucket. I think it would make much more sense to rethink the whole flow: instead of copying endpoints + token into the WebUI and/or the IPFS CLI, navigate from those tools to the pinning service endpoint and perform the authorization there instead.

This would imply that instead of obtaining a token from the pinning service and configuring an application, the pinning service would need to be configured to authorize a key from the program. Turning things around removes complexity from the WebUI and leaves it up to the pinning service to choose the right complexity for the service it provides (e.g. if the service has buckets it could provide a bucket selector; if it's a device in my closet it doesn't even need to authorize me, etc.).

lidel commented 4 years ago

@Gozala if we flip it around, and forget about the WebUI for a moment, how would the authorization flow look from the CLI? Printing a URL to the terminal and asking the user to open it?

cc @jacobheun @aschmahmann @achingbrain as this impacts API client in go-ipfs/js-ipfs

Gozala commented 4 years ago

@Gozala how would the authorization flow look from the CLI?

I imagine something along the lines of git remote, maybe ipfs service {name} {url}? E.g. running ipfs service add pinata https://pinata.cloud would:

  1. Associate the name pinata with the pinning service
  2. Open the system default browser with a URL like https://pinata.cloud?authorize={pubKey}

The user can log in (or sign up) if not already logged in and authorize the owner of the corresponding private key to do the remote pinning. The service provider could decide what the UI should look like.

Just like with ipns, -k/--key could be passed to use a non-default peer ID key.

I imagine pinning service providers would also have a way in settings to paste a public key without having to do anything from the CLI.
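
A rough sketch of step 2 above from the client's side – the authorize query parameter and the key encoding are assumptions, not an agreed format:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildAuthorizeURL sketches the proposed flow where the client owns a keypair
// and asks the service to authorize its public key. The "authorize" query
// parameter and the encoded key format are assumptions for illustration.
func buildAuthorizeURL(serviceURL, encodedPubKey string) (string, error) {
	u, err := url.Parse(serviceURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("authorize", encodedPubKey)
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	link, err := buildAuthorizeURL("https://pinata.cloud", "z6MkexamplePublicKey")
	if err != nil {
		panic(err)
	}
	// The CLI could print this (or open a browser) and wait for the service
	// to confirm that the key was authorized.
	fmt.Println("Authorize this node at:", link)
}
```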

Gozala commented 4 years ago

On second thought, I'm not even sure the CLI should open a browser tab. It seems like printing instructions would be enough. The WebUI, on the other hand, I think should do that.

aschmahmann commented 4 years ago

I think it would make much more sense to rethink the whole flow: instead of copying endpoints + token into the WebUI and/or the IPFS CLI, navigate from those tools to the pinning service endpoint and perform the authorization there instead. ... Open the system default browser with a URL like https://pinata.cloud?authorize={pubKey}

Will standardizing this across multiple pinning services be a problem? I suspect this isn't quite as simple as having a basic REST API operation.


There are multiple dimensions to this issue that are, IMO, independent including:

  1. Should individual devices have their own authorization mechanisms?
  2. Should users be copy-pasting data from IPFS to the pinning service, or the pinning service to IPFS?
  3. Should we authorize using opaque tokens, or cryptographic keys?

My opinions:

  1. Per-device authorization is better from a security standpoint, since it makes device-based revocation easier, instead of requiring all user machines to rotate their keys at the same time. However, it can be a bit of a pain for people unfamiliar with the paradigm. I like security and the control that comes with per-device authorization, but I suspect pinning services understand their users better than I do.

obo20 commented 4 years ago

As a few have mentioned in this thread, authentication can be tricky and currently varies a lot based on pinning service providers.

With so many different authentication methods, I would recommend keeping this part of the spec as unopinionated as possible. @lidel suggested simply doing an API_KEY header, and I'm personally a fan of that. From there, leave it up to the pinning service providers how they want to handle it.

Gozala commented 4 years ago
  • Pinata's authentication method is currently based off of a "PUBLIC_KEY" / "PRIVATE_KEY" authentication. Essentially users just pass in these keys along with their pin requests and that's what authenticates them.

@obo20 Do PUBLIC_KEY and PRIVATE_KEY correspond to pinata_api_key and pinata_secret_api_key?

  • Some services allow you to do "bucket" authentication with multiple API keys.

I am not sure if I'm overlooking something here, but it appears that the proposed method would work just as well.

  • Other services utilize JWTs for authentication.

I don't see how JWTs are incompatible either. It's just that the signing key would be a private key that the node generated instead of one that it got from the service.

Simply doing an API_KEY header does allow for a lot of flexibility, but it has the following trade-offs:

  1. User has to copy & paste things from the service into node config.
  2. It is too easy to end up sharing keys across devices.

Configuring services using keys from the client, on the other hand:

  1. Avoids copy & pasting things.
  2. Implies different keys on different devices, which provides:
    • Better security
    • A record of which device did what
  3. Provides a better UX metaphor:
    • The user creates buckets (places) for pins.
    • The user can associate multiple services (and/or devices) with the same bucket.

sanderpick commented 4 years ago

Heya! @Gozala asked for some textile perspective... I'll start by just outlining how we've thought about this stuff so far.

Harking back to the original questions:

Q: What should the authorization flow look like?

Off-band: the pinning service generates a "token"/"API key" value which is then entered into relevant UIs by the user. It is up to the Pinning Service to implement a UI for creating and managing those tokens. All requests to the Pinning Service include this static token.

This type of authorization is essentially a service password, which works best from secure environments like a server. It can be cumbersome in web/mobile apps where the key can be easily intercepted. Textile buckets use API keys in this way, but allow the key creator (the dev) to decide whether or not the service should demand a signature using an API secret, which can be validated. This was inspired by the encoding platform Transloadit. During development, devs can skip the extra security step so they can focus on building their app.

Seamless: the Pinning Service provides an authorization endpoint that can be opened to add a specific PeerID to an allowlist. The private libp2p key is then used for signing requests to the Pinning Service.

This is more like authentication, which can be paired with an access control list. ThreadDB uses a similar paradigm, which starts when a peer (or any identity) gets a token from a ThreadDB host peer. The token (JWT) is valid proof that they are who they say they are as far as that ThreadDB host is concerned, which is fine since the host can independently verify the access control rules (as in your allowlist above).

Textile buckets are a ThreadDB collection, so advanced devs may consider access control for shared buckets. Though, we probably can't assume a generic pinning service will have an access control paradigm, which makes it sound a lot more like a normal API service (like Transloadit or similar). In this case, the normal API key + secret approach sounds right, where those things are just random bytes or UUIDs. I'm not sure what else a JWT would buy here, considering the usual trade-off: JWTs are stateless (not stored by the service), which makes revocation impossible without rotating the service signing key, whereas the normal approach allows the service to just delete/invalidate the API key.

Q: How should the authorization token be passed with requests?

+1 for using Authorization: Bearer <key>. A signature could be added under something like X-API-Signature or similar.

aschmahmann commented 4 years ago

I would recommend keeping this part of the spec as unopinionated as possible

@obo20 unless I'm misunderstanding you, we do have to actually make some choices here, since we'd like to be able to say that a single client implementation works against all pinning services.

simply doing an API_KEY header and I'm personally a fan of that

How would this be compatible with your Public/Private key authentication? The client would still need to sign some structured message for this to work.

If the folks working on pinning services have use for several of these authentication mechanisms, we could theoretically mandate client support for multiple of them (e.g. my understanding of Textile's optional signing), but it has to be something we plan for, and embed, in the clients instead of just leaving it up to each service.

@sanderpick are you advocating for supporting both signed messages and opaque tokens?

Simply doing an API_KEY ... User has to copy & paste things

As I mentioned in my comment above, these components can operate independently. If we prefer that adding a pinning service to ipfs automatically sends the API_KEY to the pinning service (and opens a web browser), we can do that. IIUC, there's nothing about the API_KEY that means it has to be copied from the service to the device and not the other way around.

Gozala commented 4 years ago

IIUC, there's nothing about the API_KEY that means it has to be copied from the service to the device and not the other way around.

If the service generates the API_KEY and not the client, then it has to be copied from server to client – otherwise how is the server going to let the client know? If the client generates the key, then it can pass the key to the server, because it knows the server's address.

mikeal commented 4 years ago

There are a lot of good ideas here, but what I think needs to be considered is: how many problems are you willing to take on?

Signing is not a solved problem. There is no standardized and widely adopted signing spec for this sort of thing; the closest would be OAuth1, and I don't think anyone wants to do that.

Authorization: Bearer <key> has the benefit of limiting the problems you're taking on. If you accept it via header and query string, you'd have a fast path to adoption in pretty much any client. This is trivial to add, and since you're not defining how tokens are acquired, people can write whatever flow they like around it.

Since it's an opaque token, you can use a JWT if you want, or not – we wouldn't be locking anyone into any particular decision, which is probably the right approach.
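
For instance, the service-side handling could be as small as the sketch below; the "key" query parameter name is an assumption for illustration, nothing here is specified by the spec:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

// tokenFromRequest accepts the opaque token either from the Authorization
// header or from a query parameter; the "key" parameter name is illustrative.
func tokenFromRequest(r *http.Request) string {
	if auth := r.Header.Get("Authorization"); strings.HasPrefix(auth, "Bearer ") {
		return strings.TrimPrefix(auth, "Bearer ")
	}
	return r.URL.Query().Get("key")
}

func main() {
	http.HandleFunc("/pins", func(w http.ResponseWriter, r *http.Request) {
		token := tokenFromRequest(r)
		if token == "" {
			http.Error(w, "missing access token", http.StatusUnauthorized)
			return
		}
		// How the token is validated (JWT check, database lookup, etc.) stays
		// entirely up to the service.
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```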

obo20 commented 4 years ago

@aschmahmann By "as unopinionated as possible" I meant that I'd like to see this specced in a way that is as flexible and "backwards compatible" with existing pinning services as possible.

We can afford to make smaller changes to our authentication right now, but any large-scale refactoring of our authentication system would likely prevent us from adopting this standardized API in the desired time frame.

After further reading and consideration, the Authorization: Bearer <key> method that @mikeal and others are suggesting seems like the best approach to me. It gives most (if not all?) existing pinning services a fairly easy adoption path.

For example, on our end it shouldn't be too crazy for us to have additional logic that could read a bearer token of the format: pinata_public_key.pinata_private_key. Then, in the future, as our API continues to mature, we can move towards more "web-UI focused" authentication methods like the ones @Gozala has been talking about.
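
As a sketch of what that could look like on the service side – the composite format below is only the example from this comment, and the client still treats the whole string as opaque:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// splitCompositeToken unpacks a bearer token of the form "<public>.<secret>".
// The client never needs to know about this structure; it just forwards the
// opaque string it was given.
func splitCompositeToken(token string) (publicPart, secretPart string, err error) {
	parts := strings.SplitN(token, ".", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", errors.New("expected a token of the form <public>.<secret>")
	}
	return parts[0], parts[1], nil
}

func main() {
	pub, sec, err := splitCompositeToken("examplePublicKey.exampleSecretKey")
	if err != nil {
		panic(err)
	}
	fmt.Println(pub, sec)
}
```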

aschmahmann commented 4 years ago

bearer token of the format: pinata_public_key.pinata_private_key

If pinning services that use private keys to authenticate are prepared to do that, then we're back at opaque authentication tokens, so no problem. However, sending private keys to the server is not generally what's expected with private keys, so I just want to make sure we're on the same page there.

If we go with bearer tokens then pinning services that want to move towards actually using device-based cryptography are out of luck. I don't feel super strongly about this, but just want to make sure everyone is on board.

sanderpick commented 4 years ago

@sanderpick are you advocating for supporting both signed messages and opaque tokens?

@Gozala I was advocating for the normal API key + API secret approach. A secret is used to protect clients from having their key stolen. This is just HMAC authentication added to your proposed API key. Better described here: https://transloadit.com/docs/#signature-authentication

sanderpick commented 4 years ago

So, if I'm understanding your definitions correctly, this is an opaque token + HMAC. Though, as it is in textile, it should be up to the service whether or not they care about the added security.
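
To make that combination concrete, here is a rough sketch of an "opaque token + HMAC" request in Go. The X-API-Signature header name, the /pins path, and the choice to sign the raw request body are assumptions for illustration, not anything specified by the spec:

```go
package main

import (
	"bytes"
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
)

// signedPinRequest sends the API key as an opaque bearer token and adds an
// optional HMAC-SHA256 signature of the request body computed with the API
// secret. Header name, path, and signed bytes are illustrative only.
func signedPinRequest(endpoint, apiKey, apiSecret string, body []byte) (*http.Response, error) {
	mac := hmac.New(sha256.New, []byte(apiSecret))
	mac.Write(body)
	signature := hex.EncodeToString(mac.Sum(nil))

	req, err := http.NewRequest(http.MethodPost, endpoint+"/pins", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("X-API-Signature", signature)
	req.Header.Set("Content-Type", "application/json")
	return http.DefaultClient.Do(req)
}

func main() {
	body := []byte(`{"cid":"bafyexamplecid"}`)
	resp, err := signedPinRequest("https://pinning-service.example.com", "exampleKey", "exampleSecret", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```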

Gozala commented 4 years ago

I want to take a step back here. My primary motivation was the following:

Most of our discussion ended up being about how to authenticate: secret tokens vs. signing. I recognize that switching from token-based authentication to a signing-based one (as I have proposed) requires non-trivial changes to existing services and seems like a stretch.

Therefore I would like to decouple authentication from the original motivation: allowing the (service) client to generate the authentication token (or key) instead of making the (service) provider generate it, as that addresses some of my original motivation:

lidel commented 4 years ago

Now that #14 is merged and we have the basic minimum (support for Authorization: Bearer <key>), we should discuss ways to improve the UX of adding services, namely avoiding manual copying of the authorization token from the pinning service to the WebUI. Having that capability in the initial version of this API would be a big win for onboarding user experience and security (it is easier to revoke a single app/device than to change the API key everywhere).

To illustrate the UX needs, adding a pinning service to the WebUI in the IPFS Desktop app could look like this:

  1. User goes to the Settings screen and clicks "Add Pinning Service"
  2. A list of predefined services is shown (there is also a "Custom" option for manual config)
  3. User clicks one of the predefined providers
  4. WebUI opens an authorization page at the Pinning Service (PS) using a well-known API endpoint (passing any data that is required)
  5. PS provides an interface for "granting pinning permissions to app X" (the page could also enable the user to create a new pin space, or attach the WebUI to an existing one)
  6. Upon user approval via the PS interface, the WebUI is able to use the configured pinning service (without copying anything manually)

Qs:

momack2 commented 4 years ago

Reminder that we intentionally scoped the difficulty and implementation effort of this pinning services integration to use an API token provided by the pinning service. This is what IPFS Pinning Services currently support, and it helps constrain the problem area to a simple V0. While ideas for the future are welcome, they should not change the implementation and scope for the version we're implementing now. (aka I'd relabel this issue to P2 to indicate that the discussion here is all about future versions)

aschmahmann commented 4 years ago

@momack2 saying that copy-pasting API keys into the user's config file is the UX we want for now is fine by me.

However, I wanted to clarify that the direction this conversation has taken is not about where the API tokens come from, but more about how we get them onto users' devices. There are some related security discussions around how API tokens are generated and how they are added + saved in the application (i.e. go/js-ipfs), but they can be handled mostly independently of how automated the process is of taking the key "IAMALARGERANDOMSTRING12345" and getting it onto the user's device.

As I mentioned, if this is still considered P2 then no complaints from me, but I wanted to make sure we're on the same page.

lanzafame commented 4 years ago

I haven't read all the comments but I thought it may help to share my original thinking around auth and the pinning service API.

Firstly, the reason for the JWT: JWTs allow independent verification of the authenticity of the token without a runtime dependency on an external service. They also already specify a mechanism for passing custom metadata that the pinning service needs, such as a username.

Secondly, the authentication flow: I never wanted to specify an actual authentication protocol such as OAuth or OpenID Connect, as that may very well be overkill for a lot of services. So my solution was that any service would just need a separate API that handled users however it wanted; as long as that API provided the client application with a JWT, the pinning service would be able to function independently. This meant that projects like IPFS Cluster could support the authentication of requests, and the extraction of metadata, without needing to implement an actual user system – things like user registration, password management, two-factor auth, etc.
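
As a rough illustration of that "independent verification" property: with the shared signing secret, the pinning service can check a token and read custom claims locally. A minimal HS256-only sketch in Go, where the claim name, the algorithm choice, and the missing expiry check are all simplifications:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// verifyHS256 checks a JWT's HMAC-SHA256 signature with a shared secret and
// returns its claims, without calling any external service. It skips expiry
// and algorithm-header checks to keep the sketch short.
func verifyHS256(token, secret string) (map[string]interface{}, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, errors.New("not a JWT")
	}
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(parts[0] + "." + parts[1]))
	want := mac.Sum(nil)

	got, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil || !hmac.Equal(got, want) {
		return nil, errors.New("bad signature")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, err
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return nil, err
	}
	return claims, nil
}

func main() {
	// A real token would come from the separate user-handling API;
	// "username" is an example claim name, not something the spec defines.
	claims, err := verifyHS256("example.token.signature", "shared-signing-secret")
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("authenticated user:", claims["username"])
}
```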

Hopefully, this is helpful to understanding the initial intentions of the design.

lidel commented 4 years ago

We decided to limit the scope of the MVP to copying an opaque "secret key" string (#14), as noted by @momack2 in https://github.com/ipfs/pinning-services-api-spec/issues/6#issuecomment-657930922

I am closing this in favor of https://github.com/ipfs/pinning-services-api-spec/issues/34 and #6. Feel free to continue discussion there.

obo20 commented 4 years ago

@lidel I was reading through the spec as I'm starting to test out implementing this and noticed this line: https://github.com/ipfs/pinning-services-api-spec/blob/4264744add97c50a7e26a0cfe2896e8cb07849d9/ipfs-pinning-service.yaml#L98

It's my understanding that we're not doing per device authorization just yet. Could this line be modified to state that?

lidel commented 4 years ago

@obo20 we are not doing advanced flow, but would like to keep this language in the spec.

The goal here is to at least steer pinning services into making user interface decisions that enable users to create more than one token.

Framing it around devices is easier to reason about than a dry "user should be able to create and manage multiple independent tokens".