mattrglobal / draft-looker-oauth-client-id-scheme


Add a security consideration for Server-Side Request Forgery (SSRF) #11

Open CheariX opened 1 year ago

CheariX commented 1 year ago

The current draft should discuss Server-Side Request Forgery (SSRF) and provide mitigation strategies for it.

Currently, I see the following issues:

Attack Vectors

  1. Client Metadata Response

    "logo_uri": "https://client.example.com/logo.png",
    "jwks_uri": "https://client.example.com/my_public_keys.jwks",

These parameters could force an AS to call untrusted URLs, such as private IPs or internal networks, which is known as SSRF (a sketch of this vulnerable pattern follows the list below).

  2. Authorization Request Using Client Discovery

    client_id=https%3A%2F%2Fclient.example.com
    &client_discovery=true

An attacker could abuse client_id (which is a URL in this case) and client_discovery to force an AS to call untrusted URLs, such as private IPs or internal networks, which is known as SSRF.

  3. Token Request Using Client Discovery

Same as above.
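
To make the attack surface concrete, here is a minimal Python sketch of an AS that blindly dereferences client-supplied URLs. This is purely illustrative: the function names and behaviour are assumptions, not code or text from the draft.

    # Hypothetical illustration only: an AS that dereferences client-supplied
    # URLs without any checks. Names are assumptions, not from the draft.
    import requests

    def resolve_client_metadata(client_id: str) -> dict:
        # client_id is attacker-controlled, e.g. "https://10.0.0.5:8500/secret";
        # the AS issues the request from inside its own network (the SSRF condition).
        return requests.get(client_id, timeout=5).json()

    def fetch_client_jwks(metadata: dict) -> dict:
        # jwks_uri (and logo_uri etc.) come from the untrusted metadata document
        # and are fetched without any host or scheme validation.
        return requests.get(metadata["jwks_uri"], timeout=5).json()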

Mitigation:

Preventing SSRF here is hard, precisely because we want the AS to fetch a client's information from a previously unknown URL. My suggestion is to add a discussion of these threats to the security considerations. We could also forbid requests to private IPs (https://datatracker.ietf.org/doc/html/rfc1918#section-3, IPv6 TBD) and require a domain name instead of an IP address. Also, we should define "https" as the only allowed scheme (no javascript:, data:, or similar).
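
For illustration, a validation sketch along these lines could look as follows. The function name, port choice, and exact checks are my assumptions, not proposed normative text.

    # Sketch: https-only, domain names rather than IP literals, and a check that
    # the name does not resolve to a private or loopback address.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    def is_safe_discovery_url(url: str) -> bool:
        parsed = urlparse(url)
        if parsed.scheme != "https":          # no javascript:, data:, http:, ...
            return False
        host = parsed.hostname
        if host is None:
            return False
        try:
            ipaddress.ip_address(host)        # reject raw IPv4/IPv6 literals
            return False
        except ValueError:
            pass                              # host is a domain name, continue
        try:
            infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        except socket.gaierror:
            return False
        for *_, sockaddr in infos:
            addr = ipaddress.ip_address(sockaddr[0].split("%")[0])
            # rejects RFC 1918 ranges, loopback, and link-local addresses
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                return False
        # Note: a production check would also have to pin the resolved address
        # for the actual request, to avoid DNS rebinding between check and use.
        return True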

tplooker commented 1 year ago

Thanks @CheariX, I believe it is worthwhile having some security considerations on this topic.

Section 2 already contains this language w.r.t the client_uri value.

The value of this field MUST be a URI as defined in RFC3986 [RFC3986] with a scheme component that MUST be https, a host component, and optionally, port and path components and no query or fragment components. Additionally, host names MUST be domain names or a loopback interface and MUST NOT be IPv4 or IPv6 addresses except for IPv4 127.0.0.1 or IPv6 (::1).

But I believe what you are suggesting is that ANY URI resolved from the metadata document should have such constraints? If so, I think this would work in most cases but could also be overly prescriptive as a general rule; for example, redirect URIs often use custom URI schemes, which a blanket rule enforcing "https" would break. Would you agree?
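
For reference, the Section 2 constraints quoted above could be checked roughly as below. This is only a sketch; the function name is an assumption and this is not text from the draft.

    import ipaddress
    from urllib.parse import urlparse

    def is_valid_client_uri(uri: str) -> bool:
        parsed = urlparse(uri)
        if parsed.scheme != "https" or not parsed.hostname:
            return False                      # scheme MUST be https, host MUST be present
        if parsed.query or parsed.fragment:
            return False                      # no query or fragment components
        try:
            addr = ipaddress.ip_address(parsed.hostname)
        except ValueError:
            return True                       # host is a domain name: allowed
        # IP literals are only allowed for the loopback addresses named in Section 2
        return addr in (ipaddress.ip_address("127.0.0.1"), ipaddress.ip_address("::1"))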

CheariX commented 1 year ago

Section 2 already contains this language w.r.t the client_uri value.

Yes. It's good to have it there. However:

  1. I don't see any reason for allowing private IPs and loopback addresses, such as 127.1 or ::1. Did I miss anything? Is there a valid use case that I'm unaware of? If allowed, an attacker could use these IPs, for example, to scan which services are running on that server (information disclosure via port scanning). I think it would be better to prohibit all private IPs.

  2. I'd like to see the term "Server-Side Request Forgery" in the security considerations. For example, we could point to OWASP to help developers and administrators become aware of this issue. They also recommend prohibiting access to internal networks. I think this point is not clear to everyone from reading Section 2 alone.

About the https scheme: I know that some clients use custom schemes (e.g., Google uses storagerelay://). But allowing arbitrary schemes may result in severe vulnerabilities (e.g., we found a javascript:// bug in Keycloak leading to XSS). So I'm not a big fan of this point and think it's safer to disallow custom schemes in dynamic use cases like client discovery.

tplooker commented 1 year ago

I don't see any reason for allowing private IPs and loopback addresses, such as 127.1 or ::1. Did I miss anything? Is there a valid use case that I'm unaware of? If allowed, an attacker could use these IPs, for example, to scan which services are running on that server (information disclosure via port scanning). I think it would be better to prohibit all private IPs.

I'd like to seek further feedback from the wider OAuth2 WG about how much more prescriptive we want to be here. I appreciate that this should at a minimum be discussed in the security considerations, but a normative MUST may be too strong. For example, in certain situations, particularly during initial application development, being able to use a localhost-based URL does have some practical advantages.

I'd like to see the term "Server-Side Request Forgery" in the security considerations. For example, we could point to OWASP to help developers and administrators become aware of this issue. They also recommend prohibiting access to internal networks. I think this point is not clear to everyone from reading Section 2 alone.

Yes this seems reasonable to me.

About the https scheme: I know that some clients use custom schemes (e.g., Google uses storagerelay://). But allowing arbitrary schemes may result in severe vulnerabilities (e.g., we found a javascript:// bug in Keycloak leading to XSS). So I'm not a big fan of this point and think it's safer to disallow custom schemes in dynamic use cases like client discovery.

I was thinking more about situations where, say, a client has a redirect_uri that uses a custom platform-specific scheme, e.g. com.example.application://. Could you propose how we could limit schemes without excluding these cases? Perhaps a security recommendation that outlines common schemes that are problematic?

CheariX commented 1 year ago

I was thinking more about situations where, say, a client has a redirect_uri that uses a custom platform-specific scheme, e.g. com.example.application://. Could you propose how we could limit schemes without excluding these cases? Perhaps a security recommendation that outlines common schemes that are problematic?

It took me some time to think about this. If we really want to support such custom schemes, it is hard to deal with "dangerous" schemes. AFAIK, there is no complete list, and denylists are known to be a weak mitigation (in contrast to allowlists). Since allowlists won't work if custom schemes are to be supported, I think the best approach would be to recommend in the security considerations that developers allow only the https scheme where applicable, and otherwise make them aware of dangerous schemes such as javascript://.
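
As a sketch of that compromise, the check below prefers https-only and otherwise falls back to a denylist. The names and the denylist contents are illustrative assumptions, and the denylist is deliberately incomplete, which is exactly its weakness.

    from urllib.parse import urlparse

    # Illustrative, necessarily incomplete denylist of schemes known to be dangerous.
    DANGEROUS_SCHEMES = {"javascript", "data", "vbscript", "file", "blob"}

    def is_acceptable_scheme(uri: str, allow_custom_schemes: bool) -> bool:
        scheme = urlparse(uri).scheme.lower()
        if scheme == "https":
            return True                       # preferred: https only
        if not allow_custom_schemes:
            return False                      # https-only deployments stop here
        # e.g. com.example.application:// for native-app redirect URIs
        return scheme != "" and scheme not in DANGEROUS_SCHEMES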

tplooker commented 1 year ago

@CheariX agreed. Also, given that the client's metadata document is itself an extensible data model, meaning it may feature new metadata elements in the future, it would be hard to apply a blanket rule covering all URIs that may occur in this discovery document. I believe instead we should add a security consideration to the document that describes this.

Also, please see #16, which we have merged and which I think at least partially addresses one vector for SSRF.

tplooker commented 1 year ago

Further addressed in https://github.com/mattrglobal/draft-looker-oauth-client-id-scheme/commit/a9e84e458e13d3f0a842556982f3ed009c0a3193