joshuakarp opened 3 years ago
In addition to rebuilding OAuth2 server from scratch, there's a related problem about having an HTTP API.
GRPC is compatible with HTTP API if there's a gateway that translates our GRPC calls to HTTP.
Having an HTTP API will make it easier for the Polykey agent to be integrated with other kinds of clients like web browsers which don't support GRPC.
Right now the GRPC ecosystem has a thing called "grpc-gateway" which is a go-based server that generates HTTP API from a common protobuf spec with GRPC. However this is written in Go and cannot be integrated easily into our Node ecosystem.
We can manually create an HTTP API and call the same internal domain methods but this does add additional maintenance cost as we now have to maintain conformance between the GRPC API and the HTTP API.
Note we have a past issue about using fastify instead of express for implementing an HTTP API: https://gitlab.com/MatrixAI/Engineering/Polykey/js-polykey/-/issues/204
Therefore:
I'm going to elevate this issue into an Epic, as it is quite relevant to multiple potential subissues.
I'm removing these dependencies and they deserve a review on whether they are needed to implement this:
"express": "^4.17.1",
"express-openapi-validator": "^4.0.4",
"express-session": "^1.17.1",
"js-yaml": "^3.3.0",
"jsonwebtoken": "^8.5.1",
"oauth2orize": "^1.11.0",
"passport": "^0.4.1",
"passport-http": "^0.3.0",
"passport-http-bearer": "^1.0.1",
"passport-oauth2-client-password": "^0.1.2",
"swagger-ui-express": "^4.1.4",
"@types/express-session": "^1.17.0",
"@types/js-yaml": "^3.12.5",
"@types/jsonwebtoken": "^8.5.0",
"@types/oauth2orize": "^1.8.8",
"@types/passport-http": "^0.3.8",
"@types/passport-http-bearer": "^1.0.36",
"@types/passport-oauth2-client-password": "^0.1.2",
"@types/swagger-ui-express": "^4.1.2",
"swagger-node-codegen": "^1.6.3",
There are way too many dependencies just to implement an HTTP API from the old code. Any new code should focus on something like this:
- fastify or 0http - as minimal as possible, because we are doing very basic things here in our http server, we just want to route it like the handlers we have for our grpc client server
- jose - this is something we should be able to implement ourselves directly without any additional dependencies (our jwt tokens should mean that our authentication is stateless, and it is expected to be put on the HTTP request headers)
- fastify would be better here: https://github.com/fastify/fastify-swagger

Here is the old code that can be revitalised when we investigate this:
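As a rough sketch of the new, minimal direction (separate from the old code above), a fastify server with stateless JWT checks could look something like the following. This is only a hedged sketch: the vaultManager domain method and the verification key handling are hypothetical placeholders, not the actual PK implementation.

```ts
// Hedged sketch, assuming fastify + jose; the vaultManager shape is hypothetical.
import Fastify from 'fastify';
import { jwtVerify, type KeyLike } from 'jose';

async function createHttpApi(
  publicKey: KeyLike,
  vaultManager: { listVaults(): Promise<Array<string>> },
) {
  const server = Fastify();

  // Stateless auth: the JWT arrives on the Authorization header, no session store needed
  server.addHook('preHandler', async (request, reply) => {
    const auth = request.headers.authorization ?? '';
    const token = auth.replace(/^Bearer\s+/i, '');
    try {
      await jwtVerify(token, publicKey);
    } catch {
      return reply.code(401).send({ error: 'invalid token' });
    }
  });

  // Route handlers just delegate to the same internal domain methods as the GRPC handlers
  server.get('/vaults', async () => {
    return { vaults: await vaultManager.listVaults() };
  });

  return server;
}
```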
Some extra thoughts on this after reading: https://news.ycombinator.com/item?id=28295348 and https://fly.io/blog/api-tokens-a-tedious-survey/.
The sessions domain is the starting point of this epic. Any development should evolve from that. MatrixAI/Polykey#204 MatrixAI/Polykey#211

It's important for all services to make use of TLS when connecting to PK. This is true for PK to PK, CLI/GUI to PK, and third parties to PK.
However only PK to PK is expected to be based on mtls, whereas CLI/GUI to PK is just tls, and third parties are the same. The main reason for this is that you need a PKI system to bootstrap mtls.
Having mtls before mtls is a chicken-or-egg problem. Thus bootstrapping an mtls-based inter-service architecture requires a PKI that itself doesn't require mtls to connect to in order to start working. Bootstrapping mtls is a wiki use-case problem, and it's part of us exposing more cryptographic operations to the end user, which will require solving: MatrixAI/Polykey#155
Storage and management of session tokens, and eventually these HTTP API tokens, is currently done directly on our DB. It does not make use of our vault/EFS abstraction.
If we want to dog-food our own creation and secrets management system, does it not make sense to reuse the vaults for storing the session tokens, if we expect users to use the vault system to manage their inter-service API tokens? It would be a strong test of our user experience. Bonus points for making PK use PK (a self-hosting secrets management system that uses its own secrets management system to manage its own secrets), thus a higher-order PK.
The current limitation is the lack of schemas that make vaults an arbitrary file system. Plus we haven't fully settled on the vault API. So these issues, and all the relevant vault-related APIs, will be relevant here:
This would enable all of the usecase workflows for our own session tokens. For example:
@scottmmorris
Related to MatrixAI/Polykey#235 in having a GRPC-web gateway, there are also projects that enable a GRPC-gateway to HTTP Restful API.
For PK GUI usage, the main importance is to avoid bridging via electron IPC and to expose rich streaming features to the front end; we don't actually want to use a raw HTTP API. This means going down one of 2 routes:
For mobile clients, I imagine there will be fiddling required for GRPC as well, and possibly using GRPC-web as well.
If we change to using JSON RPC over HTTP1.1 & HTTP2, I believe the oauth situation will likely be a lot simpler to implement. And as an "open" decentralized system, it would be easier for third party and end-user systems to integrate over the network with a polykey node.
GRPC seems to be focused on being an "internal" microservice protocol (where it is expected that a centralised entity (company) controls the entire infrastructure); it's not really something that "external" users can easily integrate into.
In relation to blockchain identities MatrixAI/Polykey#352 and the DID work https://www.w3.org/TR/did-core/ in W3C, I want to point out that the oauth2 protocol and associated openid connect protocol is also related to this overall concept of "federated identity" https://en.wikipedia.org/wiki/Federated_identity.
Additional details: https://medium.com/amber-group/decentralized-identity-passport-to-web3-d3373479268a
Our gestalt system MatrixAI/Polykey#190, represents a decentralised way of connecting up identity systems.
Now we can research and figure out how to position Polykey relative to all these different identity systems and protocols that are being developed currently. But there's something here related to our original idea of "secrets as capabilities".
Recently I discovered that Gitlab's CI/CD system supports using the OIDC protocol as a way to bootstrap trust between the gitlab runner, the CI/CD job, and a secrets provider (like hashicorp vault or aws STS MatrixAI/Polykey#71), and then allows the CI/CD job to request secrets that are scoped to the job (such as accessing some AWS S3 resource).
Documented:
This is basically an implementation of the principle of least privilege. Consider that the CI/CD job is being "privilege separated" by not being given "ambient authority" due to environment variables, and it is "privilege bracketed" because its secret-capabilities are only useful for that specific job, and once the job is done, it experiences "privilege revocation" either by having the token expire, cryptographically-one-time-use through a nonce, or through some state change in the resource controller.
AWS recommends against storing "long-term credentials". Time expiry is a naive way of implementing privilege bracketing, since it is technically only bracketed by the time factor/dimension and by the "space dimension" of where it is given.
One of the useful things here is that the secret provider is able to change or revoke the secret after it is given to the user of that secret. This is basically a proxy-capability, which is how capsec theorises capabilities can be revoked.
This basically validates the idea that capabilities used in a distributed network context ultimately require a "serialisation" format. And the serialisation of a capability is ultimately some sort of secret token. By embedding "logic" into the token itself via a JWT (structured token), you're able to build "smarts" into the token so that it can function as a "capability". You might as well call this a "smart token", as opposed to dumb tokens that are just a random identifier checked for equality in some session database.
This is very exciting, we are on the cusp of building this into Polykey. But it does require us to first figure out the trust bootstrap (gestalt system) and how it integrates into decentralised identifiers and OIDC identity federation, and then making usage of structured smart tokens (like JWTs) to enable logic on the tokens.
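As a minimal sketch of what such a structured "smart token" could look like, assuming the jose library; the claim names here (act, res) are illustrative only and not an agreed PK schema:

```ts
// Hedged sketch of a "smart token": a JWT whose claims carry the capability
// (scoped action, resource, expiry) instead of a dumb random session id.
import { SignJWT, jwtVerify, generateKeyPair } from 'jose';

async function main() {
  const { publicKey, privateKey } = await generateKeyPair('ES256');

  // Mint a token that only permits triggering a pull mirror on one project
  const token = await new SignJWT({ act: 'mirror:pull', res: 'project/123' })
    .setProtectedHeader({ alg: 'ES256' })
    .setIssuedAt()
    .setExpirationTime('1h')
    .sign(privateKey);

  // The verifier checks the signature and then the embedded capability
  const { payload } = await jwtVerify(token, publicKey);
  if (payload.act !== 'mirror:pull' || payload.res !== 'project/123') {
    throw new Error('token does not carry this capability');
  }
}

void main();
```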
Now back to the topic of OIDC identity provider. AWS describes their "web identity federation" as:
IAM OIDC identity providers are entities in IAM that describe an external identity provider (IdP) service that supports the OpenID Connect (OIDC) standard, such as Google or Salesforce. You use an IAM OIDC identity provider when you want to establish trust between an OIDC-compatible IdP and your AWS account. This is useful when creating a mobile app or web application that requires access to AWS resources, but you don't want to create custom sign-in code or manage your own user identities.
This could describe any existing identity system that wants to allow external systems to interact with identities. Even things like "login with GitHub" are basically allowing a third party system to interact with identities on GitHub, and delegating the responsibility of managing identities to GitHub. But what it means to interact with identities goes deeper than just SSO. And this is what PK addresses.
And to be clear, OpenID Connect is OAuth 2.0 with extra features. So it's basically saying AWS supports OAuth2: log in with AWS, and then use AWS resources in third party apps. But you can also do this with software agents, it doesn't have to be human people.
Here's an interesting way of explaining this whole idea of "smart tokens". It was my journey through this.
Many years ago we start with CapSec - capability based security. It started some controversy and criticisms came in.
Then came the Capability Myths Demolished paper: https://blog.acolyer.org/2016/02/16/capability-myths-demolished/. It criticised the criticism. Also see: https://news.ycombinator.com/item?id=22753464. The paper itself is quite readable.
Then came c2 wiki: https://wiki.c2.com/?CapabilitySecurityModel.
In it, it explained that the decentralized world of the internet ultimately lacks a "central authority" (i.e. the kernel in an operating system) that forges and is the origin of the capabilities http://wiki.c2.com/?PowerBox, and thus one must transfer "tokens" as reified capabilities that can be sent between decentralized programs.
The problem is that our tokens are just dumb strings that have to be matched for equality in some database. They didn't really satisfy a lot of the cool things you can do in a capability operating system. And you could say they were easy to understand, and so everybody ended up creating their own ad-hoc session system without fully understanding all the implications and what it could be capable of.
So slowly we realised that these dumb tokens can become smarter by embedding information into the token. Like the HMAC protocol.
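For example, here is a minimal sketch of such an HMAC-authenticated token, assuming Node's crypto module; the claim layout and the handling of the shared secret are purely illustrative:

```ts
// Hedged sketch: claims are embedded in the token and authenticated with an HMAC,
// so the server can verify the token without a session database lookup.
import { createHmac, timingSafeEqual } from 'node:crypto';

function mintToken(claims: object, secret: string): string {
  const body = Buffer.from(JSON.stringify(claims)).toString('base64url');
  const mac = createHmac('sha256', secret).update(body).digest('hex');
  return `${body}.${mac}`;
}

function verifyToken(token: string, secret: string): object | undefined {
  const [body, mac] = token.split('.');
  if (body == null || mac == null) return undefined;
  const expected = createHmac('sha256', secret).update(body).digest('hex');
  if (mac.length !== expected.length) return undefined;
  if (!timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) return undefined;
  return JSON.parse(Buffer.from(body, 'base64url').toString());
}
```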
Further development led to the "macaroon" idea: https://github.com/nitram509/macaroons.js. A cookie.
Today we can instead talk about JWT, and I believe JWT has taken over. https://neilmadden.blog/2020/07/29/least-privilege-with-less-effort-macaroon-access-tokens-in-am-7-0/
However how JWTs can be used is still a point of R&D. And how JWTs could be used as "smart tokens" that realise the ideals of capsec across decentralised services is still being developed. I believe we can rebrand capabilities as "smart tokens" and it would have the cultural cachet of "smart contracts".
It'd be interesting to see how these smart tokens can be computed against, and how these smart tokens enable ABAC, and most importantly revocability which requires indirection (https://news.ycombinator.com/item?id=22755068).
Here's a real world example of this problem.
Both GitHub and GitLab support webhooks. Webhooks are a great way of allowing web-services to provide "push" based configuration mechanism.
Right now we have GitLab mirrors pulling GitHub, and it does it by polling GitHub. One of the advantages of push based configuration is the ability to avoid polling and to have minimal delay so that events arrive faster. A sort of "best-effort delivery" and reliable delivery.
So to make GitHub push to GitLab, we can configure a webhook on GitHub.
This requires GitLab to have an API that supports a trigger to pull on a project mirror.
This API call is here: https://docs.gitlab.com/ee/api/projects.html#start-the-pull-mirroring-process-for-a-project
GitHub has a great webhook panel for debugging webhooks.
But the problem now is setting secrets.
Apparently there is a standard for secret passing on webhooks defined by the WebSub protocol, but not all providers support it. In any case, Gitlab suggests using the ?private_token=... query parameter.
The resulting web hook looks like:
https://gitlab.com/api/v4/projects/<PROJECTID>/mirror/pull?private_token=<PERSONAL_ACCESS_TOKEN>
Now you need an access token.
The problem is that this token creation has to be done on GitLab, and during the creation of the token, you need to grant it privileges. Obviously we don't want to give GitHub ENTIRE access to our API. But the token has to be at Maintainer level, and you need the full api scope.

This privilege requirement is TOO MUCH. The token isn't really safe, it's just sitting in plain text in the webhook UI settings of the project. The token is at maintainer level, thus capable of deleting projects... etc, and finally it covers the entire api scope, rather than being limited to a specific API call.
The ambient authority of this token is too much for something that creates a minor benefit (making our mirrors faster), and for something that is handled this insecurely.
Thus making this integration too "risky" to implement.
Therefore having "smart tokens" would reduce the marginal risk of implementing these sorts of things so we can benefit from more secure integrations. One could argue that integrations today are limited by the inflexibility of privilege passing systems.
Example of the world moving towards passwordless authentication (Apple's keychain supporting it): https://news.ycombinator.com/item?id=31643917. This relies on an "authenticator" application, and not just an OTP-style authenticator, but basically the holder of your identity sort of thing.
Recently github had a token breach issue (which is a clear example of secret management problem usecase):
We're writing to let you know that between 2022-02-25 18:28 UTC and 2022-03-02 20:47 UTC, due to a bug, GitHub Apps were able to generate new scoped installation tokens with elevated permissions. You are an owner of an organization on GitHub with GitHub Apps installed that generated at least one new token during this time period. While we do not have evidence that this bug was maliciously exploited, with our available data, we are not able to determine if a token was generated with elevated permissions.
A vulnerability for about 1 week between 25th of Feb to 2nd of March.
GitHub learned via a customer support ticket that GitHub Apps were able to generate scoped installation tokens with elevated permissions. Each of these tokens are valid for up to 1 hour.
GitHub quickly fixed the issue and established that this bug was recently introduced, existing for approximately 5 days between 2022-02-25 18:28 UTC and 2022-03-02 20:47 UTC.
These tokens are used by third party apps when you want to use them with GitHub.
GitHub Apps generate scoped installation tokens based on the scopes and permissions granted when the GitHub App is installed into a user account or organization. For example, if a GitHub App requests and is granted read permission to issues during installation, the scoped installation token the App generates to function would have issues:read permission. This bug would potentially allow the above installation to generate a token with issues:write, an elevation of permission from the granted issues:read permission. The bug did not allow a GitHub App to generate a token with additional scopes that were not already granted, such as discussions:read in the above example. The bug did not allow a GitHub App to access any repositories that the App did not already have access to.
So basically, apps would request a token with issues:read permission/capability. But somehow, the apps would be able to acquire a token that had issues:write capability instead. If customers only saw issues:read, they basically granted an app more capabilities than they intended. It was of course an "elevation" of a capability, but not "extra" scopes as they put it, because permissions are hierarchically organised in terms of "scope", and this was an elevation of a permission within the scope itself.
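To illustrate the hierarchy point, here is a hedged sketch of a scope check where write is an elevation of read within the same scope; the model is hypothetical and not GitHub's actual authorisation logic:

```ts
// Hedged illustration of hierarchical permissions: within a scope like `issues`,
// `write` is an elevation of `read`; extra scopes are never implied.
type Access = 'read' | 'write';

function satisfies(granted: `${string}:${Access}`, requested: `${string}:${Access}`): boolean {
  const [gScope, gAccess] = granted.split(':') as [string, Access];
  const [rScope, rAccess] = requested.split(':') as [string, Access];
  if (gScope !== rScope) return false; // extra scopes are never implied
  if (gAccess === 'write') return true; // write covers read within the same scope
  return rAccess === 'read'; // read only covers read
}

// satisfies('issues:read', 'issues:write') === false  <- the bug allowed this elevation
// satisfies('issues:read', 'discussions:read') === false
```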
In order to exploit this bug, the GitHub App author would need to modify their app's code to request elevated permissions on generated tokens.
I'm guessing this is part of the "third party" oauth flow, where third party apps can be installed on the platform (in this case GitHub), say they want only issues:read, but end up acquiring issues:write instead.
GitHub immediately began working to fix the bug and started an investigation into the potential impact. However due to the scale and complexity of GitHub Apps and their short-lived tokens, we were unable to determine whether this bug was ever exploited.
Of course due to a lack of auditing of token usage, there's no way to tell if this elevated token was ever used, or if any apps took advantage of these elevated permissions.
We are notifying all organization and user account owners that had GitHub Apps installed and had a scoped installation token generated during the bug window so that they can stay informed and perform their own audit of their installed GitHub Apps.
As a followup to this investigation, GitHub is looking at ways to improve our logging to enable more in-depth analysis on scoped token generation and GitHub App permissions in the future.
Exactly more auditing of smart tokens.
<Reference # GH-0003281-4728-1>
In summary, GitHub's problem is really 2 problems: 1. a bug allowed tokens to be generated with elevated permissions, and 2. there was no auditing of token usage, so there was no way to tell whether those elevated tokens were ever used.
Can a software solve problem 1 and 2?
In terms of 2, yes a software can definitely solve that, but not traditional logging software. Traditional logging software actually results in secret leaks because people forget to wipe passwords/tokens from logs.
Secret-usage auditing/logging requires a purpose-built logging software designed to audit secret usage in particular. It's a far more complex workload than just non-secret logging.
As for problem 1, the idea that a software can solve this problem is a little bit more nebulous. This is because the problem is an intersection of different interacting systems, and all parts of the system have to be coded securely for the entire protocol to work securely. One cannot say that the application of a piece of software will eliminate these protocol bugs, because it depends on the correctness of the entire system, not just the sub-pieces. Security is inherently cross-functional and intersectional between machine to machine and human to machine.
But we understand that software can be 2 things: Framework or Library.
A library itself cannot solve this intersectionality problem.
However a framework can provide the structure so that the intersectionality performs/behaves according to a logically/securely-verified contract.
This reminds me of Ory and Kratos.
Ory Kratos is a fully customizable, API-only platform for login, two-factor authentication, social sign in, passwordless flows, registration, account recovery, email / phone verification, secure credentials, identity and user management. Ory Hydra is an API-only, "headless," OAuth 2.0 and OpenID Connect provider that can interface with any identity and user management system, such as Ory Kratos, Firebase, your PHP app, LDAP, SAML, and others. Ory Oathkeeper is a zero trust networking proxy and sidecar for popular Ingress services and API gateways. It checks if incoming network request are authenticated, and allowed to perform the requested action. Ory Keto is the world's first and leading open source implementation of Google's Zanzibar research paper, an infinitely scalable and blazing fast authorization and permissioning service - RBAC on globally distributed steroids.
These software systems originally confused me because they did not explain to me as a developer that these are "frameworks". They are not "libraries" that are just plug-and-play.
If you want to sell a "framework", it's always going to be harder, because the customer has to conform their mental model and intersectionality to the framework. And that is a costly endeavour as it may be incompatible with their existing structure. You just can't easily bolt on a framework (compared to a library) after your software is already developed.
Libraries are inherently easier to just integrate, because they are self-contained. But self-contained systems cannot solve problems like security intersectionality.
Therefore frameworks work best when the client/customer has chosen it from the very beginning, the tradeoff is that if the framework turns out to be incorrect or faulty, it's always going to be alot more difficult to move away from it, because the framework structure imposes deep path dependency on software development, and the larger your software becomes, the more the framework is deeply embedded.
This is why frameworks are chosen after they have broad community/industry acceptance, because the more widely deployed a framework is, the more likely the framework has arrived at a generalisable non-leaky abstraction. On the other hand, this could also be because the framework just gains hacks after hacks due to legacy, and it ends up working only for the lowest common denominator or becomes a giant ball of complexity that handles every edge case.
To summarise, in application to PK: if PK wants to solve these 2 problems, it would need to act like a framework.
I wonder if there's a generic systems engineering terminology for the difference between library-like systems and framework-like systems.
This issue has become a more generic research/design issue now, as I'm exploring the intersection of a few ideas. There's a few more axes to consider:
Some competing solutions are firmly on the server side. Most SAAS software in this space are usually server side. For example Hashicorp's Vault and Ory solutions above are all server side solutions.
Password managers are generally client side solutions, the users of the password manager are not the web services, but the users of those web services.
There are also orchestration side solutions; these govern the interaction between client side and server side, acting like middlemen. These are the hardest solutions to deploy because they require consent from both the client and server side. However you can imagine that inside a corporation, there can be a top-down plan to run an orchestration side solution internally between different sub-systems, some acting as client and some acting as server. Examples include kubernetes and also hashicorp vault.
A client side solution needs to optimise for non-technical users and end-user platforms like mobile phones, browsers, desktops, laptops, human to machine communication and GUIs. Server side solutions need to optimise for machine to machine communication, automation, availability, APIs... etc.
PK can be client side, and it can be server side, it could also be orchestration side. But are there tradeoffs we are making by trying to be all sides? Does it also make sense to cover all sides? Perhaps we should focus on a particular side as a beachhead/lodgement before tackling all the other sides.
Decentralised solutions imply a distributed system.
Distributed systems don't imply a decentralised system.
A software can be distributed but not decentralised.
Server side solutions are often distributed because of the demands of scale, look at Hashicorp Vault's recommendation for 3+ vault nodes. However they are often not decentralised, because server side solutions tend to be used by large actors who want centralised control.
Client side solutions are often non-distributed but benefit from decentralisation.
For decentralisation to benefit server side solutions, the users of the server side solution must trade away control in return for participation in shared contexts, where value (non-zero sum value) can be unlocked through absolute and comparative advantage. This is the goal of free-market economics. See the transition from mercantilism to free-market/laissez-faire economics.
Centralisation will always exist, and so will decentralisation, it all depends on context. Even in a free market, corporations exist as little centralised fiefdoms.
Decentralised systems can be used in a centralised way. In terms of software complexity, any time software becomes distributed, and then becomes decentralised, it increases the order of complexity of implementation.
PK is a decentralised and distributed system. However there are tradeoffs in its distributed system mechanics that currently make it less appealing for server side solutions. We need to investigate this.
Users are not always the same people as those who pay for the software solution. This is particularly true when building for centralised systems. Centralisation primarily means that there will be a separation of duties between owners and operators. This is what creates the moral hazard or perverse incentives that lead to class struggle.
This impacts software solution incentives, because one would naively think that software should be written for the users. But if the funding for software development comes from the payers, then the features are written to benefit the payers, and not the users. When payers and users are aligned (or the same people), then there's no conflict. But when they are separated, then there will be conflict.
Users can never be neglected however, because if the gap between the owners and operators increases, eventually the internal contradiction grows to the point where the perceived benefit of a solution becomes divorced from the real benefits of the solution, such that the solution will be replaced by something that closes the gap better.
When selling to centralised institutions we would have to be aware of this gap, while focusing on decentralisation we would have less misaligned incentives.
Investigated graphql, it's still not sufficient for what we need. Our RPC mechanism needs to support arbitrary streaming, and graphql just doesn't have any client side streaming or bidirectional streaming capabilities.
I was thinking that at the end of all of it, we could just have a plain old stream connection that we send and receive JSON messages over (or protobuf messages), and just make use of any stream based connection.
At the bottom, a webrtc data channel, wireguard or webtransport can then be used to provide that reliable p2p stream, and whatever it is, it must be capable of datagram punch packets. It should also provide some sort of muxing and demuxing, so that it's possible to create independent streams over the same connection (as we do right now with grpc streams).
Another thought I had is that a lot of these connections appear to be client-server; any bidirectionality then has to be done on top again. If we get rid of all the complexity, and start from a foundation of a bidirectional connection, then it should be possible for both sides to create arbitrary streams on top that enable bidirectional RPC.
That would simplify the network architecture as it means any connection from Node 1 to Node 2 would enable Node 1 to call Node 2 and Node 2 to call Node 1.
Our own RPC can still work, all we need is a defined routing system, and then an encoding of messages like JSON RPC.
Stream manipulation needs to be multiplexed on top of a connection.
I'd define 3 layers of APIs: the connection layer, the stream layer (streams muxed/demuxed on top of a connection), and the RPC layer on top of the streams.
So the final layer RPC is built on top of the streams. It will be 1 to 1 from RPC to stream. A unary call is a function that creates a stream, sends 1 message, and gets back 1 message.
A streaming call is a function that creates a stream, sends N messages, and receives N messages.
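A minimal sketch of that top layer, assuming the stream layer hands us a Web Streams duplex pair and a JSON-RPC-like message shape (both of which are illustrative, not a settled PK design):

```ts
// Hedged sketch: a unary RPC as "open a stream, send 1 JSON message, read 1 back".
type JsonRpcRequest = { jsonrpc: '2.0'; method: string; params?: unknown; id: number };
type JsonRpcResponse = { jsonrpc: '2.0'; result?: unknown; error?: unknown; id: number };

type DuplexStream = {
  readable: ReadableStream<Uint8Array>;
  writable: WritableStream<Uint8Array>;
};

async function unaryCall(
  openStream: () => Promise<DuplexStream>,
  method: string,
  params: unknown,
): Promise<JsonRpcResponse> {
  const stream = await openStream();
  const writer = stream.writable.getWriter();
  const request: JsonRpcRequest = { jsonrpc: '2.0', method, params, id: 1 };
  await writer.write(new TextEncoder().encode(JSON.stringify(request)));
  await writer.close();
  // Read the single response message back (assumes one message per chunk for simplicity)
  const reader = stream.readable.getReader();
  const { value } = await reader.read();
  if (value == null) throw new Error('no response received');
  return JSON.parse(new TextDecoder().decode(value)) as JsonRpcResponse;
}
```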
In a way this is what GRPC was supposed to do, but it specifies too much about the connection API, and it should be "connection" agnostic. At the same time, a connection is always client to server; the server has to open its own connection back to the client to send calls, but this should be unnecessary...
Maybe a better grpc https://github.com/deeplay-io/nice-grpc - then we swap out the underlying socket for a reliable socket provided by webtransport or webrtc.
On the topic of nodejs, originally the electron process could call nodejs APIs directly without needing to send data to the nodejs process. This is now considered insecure. But I believe that this can be used securely, one just needs to be aware of cross site scripting and not use the "web browser", or the secure bridge. This means grpc can be used in nodejs without a problem. At the same time... it makes sense to eventually reduce the reliance on node-specific APIs too.
Websockets opens up PK client service to browser integration and anything else that supports websockets.
Most external libraries still expect HTTP based APIs though, and I think websocket based APIs are a bit rare.
An HTTP API can support unary, client streaming, server streaming and duplex calls, except that the flow must always go client first, then server.
So unary is fine.
Client streaming means client can send multiple messages and get back 1 response.
Server streaming means client sends 1 request, then can get back multiple messages streamed.
Duplex means the client must send all, then receive all.
This means there are edge cases where the HTTP API won't be able to perform, which is anything that is interleaved IO between client and server.
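As a rough sketch, the four call shapes could be typed like this (illustrative signatures only, not a settled API):

```ts
// Hedged sketch of the four call shapes over an HTTP-style stream,
// where the client side must send before the server responds.
type Unary<I, O> = (input: I) => Promise<O>;
type ClientStream<I, O> = (inputs: AsyncIterable<I>) => Promise<O>; // send many, get 1
type ServerStream<I, O> = (input: I) => AsyncIterable<O>; // send 1, get many
type Duplex<I, O> = (inputs: AsyncIterable<I>) => AsyncIterable<O>; // send all, then receive all
```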
Where we enable a websocket server we can also produce a http server, all we have to do is present a stream object to the RPC system, but it will need to place some constraints on the semantics.
@tegefaulkes already in the case of RPC, the client side has to send at least 1 message before it attempts to await a response from the server side, and this is because the routing and triggering of the handler only occurs upon the first message. This constraint should be embedded into the system by making our stream throw an exception if you attempt to await a response from the server before first sending something to the stream.
This is an RPC issue, so it should be added into the middleware stack; you could call it a sort of "traffic light" system.
I'm not sure how we could tell if the user is waiting for an output before sending anything. I guess we'd have to use a proxy for that? But do we really want to check for that?
It's possible I could await output before writing anything and not actually block any writes. I've used this in a test where I use an async arrow function to create a promise that resolves when the reading ends. Then I start writing to the stream. In this case the output is being awaited before any messages are being sent and it's a valid thing to do.
I don't really think we need something here to prevent a possible deadlock. I'm not sure there's a clean way to address it. Deadlocks like this can happen a bunch of ways in our code and we already deal with it without problems.
It is possible to hook into a write, you just need to know when the first write has occurred and flip a boolean. This is doable on the stream callbacks.
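A hedged sketch of that hook, wrapping the writable side so the first write flips a boolean that a read guard could consult; this is only an illustration, not the actual middleware:

```ts
// Hedged sketch: track whether a first write has occurred on a WritableStream.
function trackFirstWrite<T>(
  writable: WritableStream<T>,
): { writable: WritableStream<T>; hasWritten: () => boolean } {
  let written = false;
  const writer = writable.getWriter();
  const wrapped = new WritableStream<T>({
    async write(chunk) {
      written = true; // flip the boolean on the first (and every) write
      await writer.write(chunk);
    },
    async close() {
      await writer.close();
    },
    async abort(reason) {
      await writer.abort(reason);
    },
  });
  return { writable: wrapped, hasWritten: () => written };
}
```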
Sure, but it's still possible to attempt a read before writing by using promises. The following will attempt a read before writing but not result in a deadlock.
const consume = (async () => {
  for await (const _ of readable) {
    // do nothing, only consume the stream
  }
})();
// do writes here
await writer.write('something');
await consume;
I see, so you're saying that in some cases it can be legitimate for something to start reading while something else concurrently writes. In that case, yeah, it doesn't make sense to throw an exception on read if a write hasn't been done... It is still a potential footgun though. It could result in promise deadlock if one isn't aware of this.
Created by @CMCDragonkai
Once we reintroduce OAuth2 into Polykey, because our OAuth2 requirements are pretty simple, it would be beneficial to implement it from scratch to reduce dependency requirements.
We have already implemented an Oauth2 client side from scratch in MR 141.
This would mean these dependencies we can get rid of:
Actually @CMCDragonkai is in favour of getting rid of password related libraries entirely so that we can be much lighter weight.
Like all of these:
In our usage of OAuth2 server-side, we would expect only a 2-legged flow. For example a "client-credentials-flow": https://docs.microsoft.com/en-us/linkedin/shared/authentication/client-credentials-flow
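As a hedged sketch of what the 2-legged client-credentials exchange looks like on the wire; the token endpoint URL and credentials here are placeholders, not PK's actual API:

```ts
// Hedged sketch of the OAuth2 client-credentials (2-legged) flow.
// Assumes Node 18+ global fetch; endpoint and credentials are placeholders.
async function fetchAccessToken(
  tokenEndpoint: string,
  clientId: string,
  clientSecret: string,
): Promise<string> {
  const response = await fetch(tokenEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  if (!response.ok) throw new Error(`token request failed: ${response.status}`);
  const { access_token } = (await response.json()) as { access_token: string };
  return access_token;
}
```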