firebase / firebase-ios-sdk

Firebase SDK for Apple App Development
https://firebase.google.com
Apache License 2.0

Simplify SSL pinning #9949

Closed: r-dev-limited closed this issue 2 years ago

r-dev-limited commented 2 years ago

Feature proposal

After a very interesting discussion on the Firebase Support Slack channel, I've realised that the security guidance in the Firebase documentation for Cloud Functions (callables) does not explain the implications of man-in-the-middle attacks.

People have experienced real attacks: intercepting HTTPS callable functions and changing the payload using Charles Proxy. I was NOT aware that this is even possible over an HTTPS connection (with SSL provided by Google) with App Check enabled, using the native iOS SDK.

However, it seems there is no SSL pinning by default, no documentation or easy way of adding it, and no mention in the documentation that people should be aware of such an attack.

@ybrikeeg is willing to share details of the attack and provide a demo.

I am just trying to get some support, guidance, and also transparency in the documentation.

Can someone please also point to workarounds? I assume there is a way to create a Google-managed SSL cert, assign it to the Cloud Function, and then somehow pin it in iOS?

r-dev-limited commented 2 years ago

Example of how to intercept encrypted SSL communication: https://www.raywenderlich.com/21931256-charles-proxy-tutorial-for-ios#toc-anchor-010

andrewheard commented 2 years ago

Hi @r-dev-limited and @ybrikeeg, thanks for your questions. Would you mind sharing more details about the scenario/attack vector that you're concerned about?

Charles Proxy, and other similar applications, do allow you to essentially man-in-the-middle attack yourself, which makes them great for debugging or investigating networking in apps on your own device. However, Charles Proxy does require you to install a Profile that trusts the Charles Proxy Certificate Authority (CA) and also to trust the Charles Proxy SSL Certificate in order to intercept and/or change HTTP request/response payloads on your own device.

Without a profile installed, like the one provided by Charles Proxy, iOS will only trust leaf certificates signed using Apple's list of trusted root certificates, assuming NSAppTransportSecurity hasn't been explicitly disabled in the Info.plist of your app. Details about your specific use-case would be very helpful since certificate pinning, if not implemented very carefully, can be quite risky. For example, if you need to revoke a leaf certificate, or it expires, and you haven't pinned a backup certificate then your app would not be able to communicate with your server (Cloud Functions) without releasing an updated app and waiting for users to install it.

r-dev-limited commented 2 years ago

Thank you for the quick reply, @andrewheard.

When I heard about this, the first use case that came to mind was cheat apps for iOS that don't even require a jailbreak.

When you can "just" install and "verify" the Charles certificate (which is obviously not an issue for cheaters or people willing to jailbreak their devices) and then modify requests unnoticed... that seems like a big security risk to me.

I always thought that using SSL, App Check, AND Cloud Function callables ensures the integrity of the message, the auth status, and the origin.

Now it seems that for any important usage you have to encrypt it yourself and hold a server key to decrypt it (e2e encryption), or sign some form of hash of the payload and verify it matches on both ends... I don't know...

But I definitely thought this worked out of the box.

ybrikeeg commented 2 years ago

Thank you @r-dev-limited for creating the post.

I can give a scenario we are seeing.

When the user gives us iOS device location, we send the location (city, state, country) to our backend using an https cloud function. We rely a lot on this field for fraud purposes (we are a surveying app and need to trust this field since we send users real money for each survey). If the country is "United States", then they get a different tier of surveys than if their country is "Nigeria" for obvious reasons. However, a man in the middle can modify the cloud function payload to change country to whatever they want. So the trust is lost between the physical device having its location and our server receiving it.

I would expect App Check to protect against this. Until a solution is implemented (SSL pinning on Firebase's end, or encryption on the developer's end), our backend cannot trust any data the client passes to it. This is a pretty big security vulnerability.

morganchen12 commented 2 years ago

@ybrikeeg please correct me if I'm wrong: it sounds like you're talking about a different security concern than one that would be solved by SSL pinning, which is used to prevent an attacker from intercepting an app request and then sending a response back to the app impersonating your service.

In the case where a bad client wants to send abusive requests to your server, SSL pinning isn't applicable because the bad client is not concerned with validating your backend service's responses. App Check can raise the cost of client abuse by requiring that the bad client obtain some device-level validation in order to prevent an attacker from (for example) imitating multiple clients on one device, but a sufficiently determined bad actor can circumvent that by doing something more expensive like reverse-engineering your app and running it on several phones at once. Similarly, your client traffic doesn't need to be intercepted by a MITM to report the wrong location--a user could create a jailbreak tweak of your app that allows users to select any location in a drop-down to populate the country field, or you could accidentally write a bug in your real app where country is not set correctly. This is why clients are inherently untrustworthy and why we created Security Rules, though they won't help in this use case.

What your app really needs is a more robust way to determine what survey region should be associated with a particular user.

we send users real money for each survey

You could potentially derive the survey region from your user's financial information, but this is outside my area of expertise.

r-dev-limited commented 2 years ago

@morganchen12 I agree with your comment, but that has nothing to do with the underlying issue.

The payload is not secure, even though users reading the docs believe it is. We've all heard about encryption, checking tokens, setting up rules, adding App Check to the mix...

But if you can simply install software, proxy the traffic through it, and change the content while everything still looks fine, that seems concerning to me too.

Especially since we are guided to use Cloud Functions for "dangerous/privileged" operations.

mortenbekditlevsen commented 2 years ago

When analyzing attack vectors it's often good to imagine who the attacker is.

When creating a callable you create an api for a (possibly authenticated) user to use. The user may call that callable using any client they like - and with any data they like.

So pinning the certificate in a specific client does not prevent the user from just calling the callable with a client they built themselves.

So if the attacker is the end user, it doesn't actually provide additional security to pin the certificate.

If the attacker is a 'middle man', they still need access to the device, and to unlock that device, in order to install a certificate that allows proxying through something like Charles. So if the attacker has that level of access to the device, they are basically indistinguishable from the end user. So again, pinning doesn't really provide extra security.

ybrikeeg commented 2 years ago

@mortenbekditlevsen Do you have any solutions to prevent this MITM attack? If what you're saying is we should assume any client calling our cloud functions can be an attacker, how do we differentiate traffic from an attacker vs a user?

I do agree with @r-dev-limited: developers using Cloud Functions and App Check should be made more aware of this vulnerability. Even with all the security measures the Firebase docs describe in place, all traffic can be, and should be, assumed to originate from an attacker.

mortenbekditlevsen commented 2 years ago

Hi @ybrikeeg, well, I think I was describing more of a 'man-at-the-very-origin-of-the-payload' situation. In the communication between Alice and Bob here, Alice is the end user and Bob is the callable function. Alice is the sender of the payload, so Alice modifying the payload is not an attack; it's just Alice sending a different payload.

Through Firebase Auth, the callable system can, for instance, verify that Alice is Alice, and perhaps that she is allowed to write data at a certain location in Firestore, but not that Alice only sends data that you want her to. So I guess your question is: can you (the app developer) trust Alice? I don't really see that you can. Alice is free to use the APIs you give her as she pleases, so the fact that you, as the app developer, only intend for Alice to send specific content to the web service is not part of any security promise.

If other servers that you control are the source of the data that Alice sends, then you could sign it on one server and verify it on another. But your app cannot sign any such data, because that would require the app to contain some secret, which could then quite easily be extracted from the app.

Does that answer your questions?

t0mstah commented 2 years ago

@mortenbekditlevsen It seems a little imprudent that App Check doesn't do any sort of validation on the certificates sent by a client. In the case of this specific attack, Charles Proxy sends a self-signed certificate, which should be detectable.

How often do SSL certificates change on Firebase mobile clients? Our main worry with implementing certificate pinning is that we don't control the certificate Firebase assigns, so a change or expiration in a certificate would brick the app.

mortenbekditlevsen commented 2 years ago

@tfang17 I am not a Google employee, so I don't know how often certificates change on the servers. But the risk of simultaneously bricking all apps that use firebase due to changing certificates could be a good reason not to build in pinning. 😁

t0mstah commented 2 years ago

Ah, sorry, I thought you were. An alternative solution would be mTLS, but I don't have access to client certificates in Firebase Functions, since I believe HTTPS gets terminated (stripped to HTTP) after passing through Google App Engine.

inlined commented 2 years ago

For better or worse, App Check for Cloud Functions does not work like App Check for the Realtime Database or Firestore. You have to manually verify the App Check result in your function code. Are you verifying that context.app is not undefined (event.app in the V2 API)? Without this you'll allow untrusted clients.
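The context.app check described above can be sketched as a plain helper. In a real deployment this guard runs inside a callable created with functions.https.onCall from the firebase-functions SDK and would throw functions.https.HttpsError; a generic Error stands in here so the snippet is self-contained.

```javascript
// Sketch of the App Check guard: context.app is only populated when the
// request carried a valid App Check token (event.app in the V2 API).
function assertFromVerifiedApp(context) {
  if (context.app === undefined) {
    // Real code: throw new functions.https.HttpsError('failed-precondition', ...)
    throw new Error(
      'failed-precondition: request did not originate from a verified app'
    );
  }
}

// Hypothetical callable handler body: reject untrusted clients up front.
function handleCallable(data, context) {
  assertFromVerifiedApp(context);
  return { ok: true };
}
```

Without this explicit check, requests that carry no App Check token at all are still delivered to your function and processed normally.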

It's true that an attacker could mount a very slow attack by proxying a valid request and then crafting a manual request with an App Check token that has not yet expired (it's too CPU-intensive to create one per request, so tokens are valid for a limited period of time). This method won't scale, though, because the attacker cannot create another app that mints these tokens manually, and a jailbroken iPhone that tries to instrument your app won't be able to generate an App Check token.

Certificate pinning may be an interesting addition to the Firebase platform, though it would be an SDK-team feature, and the Functions team (my team) would not add much value to this discussion. My personal thoughts on the matter are irrelevant, but I would want to make sure this is an opt-in, advanced-only feature. It's not hard to find controversy regarding certificate pinning; even some certificate authorities don't want you doing it.

mbleigh commented 2 years ago

I would add that it sounds like what you are really looking for is a way to absolutely trust the payload sent to you by a client. This is not and will never be possible. App Check is a security solution that ensures that a valid device was used to initiate a session at some point, not to make it impossible for a request to be forged (as by laboriously copying the generated token as described above).

There is no 100% method for protecting against malicious clients: if you depend on information from the client to be accurate, it will always be forgeable at the edges. I would recommend using a blend of techniques (e.g. App Check, plus geolocating the client IP, plus other methods to obscure and complicate the region-signaling process, plus out-of-band signals like payment information). A layered approach will provide the most secure system; there is no silver bullet.

andrewheard commented 2 years ago

I completely agree with @mbleigh's recommendation to use a layered approach. Just to address @ybrikeeg and @r-dev-limited's specific scenario directly, I wanted to point out that even with SSL pinning and App Check, you can't trust the device location is 100% accurate.

The location of an iOS device can be easily spoofed without jailbreaking or modifying payloads by deploying another app to the device from Xcode and enabling Allow Location Simulation in the Scheme, providing whatever location you want. This causes CLLocationManager to report the simulated location system-wide. There may be techniques to guess when a location is simulated but this is out of my area of expertise. Michael suggested geolocating the client IP above -- although that could be spoofed with a VPN, it's a great example of layering on additional techniques.

r-dev-limited commented 2 years ago

@andrewheard we all agree with that.

However, all I am trying to understand from a 'user' perspective is: what's the point of App Check?!

It can be bypassed, the SSL communication decrypted, and with a bit of knowledge you can write scripts that can even do DDoS attacks, payload forging, etc...

So if anything, we should have a comprehensive guide (docs) covering other options, maybe other Google services to combine Cloud Functions with: Google Cloud Armor, Apigee, ...?

morganchen12 commented 2 years ago

We discussed this out-of-band with @ybrikeeg and team via their Cloud support tech, and came to a somewhat bespoke solution that works for their team.

@r-dev-limited, some brief answers to your outstanding questions:

However, all I am trying to understand from a 'user' perspective is: what's the point of App Check?!

App Check significantly increases the cost of creating abusive client requests by requiring an attestation provider to validate that the user is not fake, for example via captcha on web. App Check is not a robust solution for preventing MITM or DDoS attacks, nor does it prevent a real user from lying to you.

I'll mark this issue to auto-close, but please reach out to your Cloud TSE or comment here if you have further questions.

google-oss-bot commented 2 years ago

Hey @r-dev-limited. We need more information to resolve this issue but there hasn't been an update in 5 weekdays. I'm marking the issue as stale and if there are no new updates in the next 5 days I will close it automatically.

If you have more information that will help us get to the bottom of this, just add a comment!

google-oss-bot commented 2 years ago

Since there haven't been any recent updates here, I am going to close this issue.

@r-dev-limited if you're still experiencing this problem and want to continue the discussion just leave a comment here and we are happy to re-open this.

newbstudent commented 2 years ago

Check out how to do cert pinning for iOS at https://developer.apple.com/news/?id=g9ejcf8y

For the domain it should be https://your_firebase_project_URL.firebaseio.com (for Firestore at least)

I'd pin GTS Root R1, R2, R3, and R4 to that domain in case something happens to, say, just R1.

List of all Google Trust Service certs is at https://pki.goog/repository/

Hope this helps! I'm only protecting my Firestore connections but you could protect Auth and Storage as well if you track down the domains for each service.
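For reference, the mechanism in the Apple article linked above is the NSPinnedDomains key configured in Info.plist. A hypothetical sketch pinning CA identities for the domain mentioned might look like the following; the SPKI hash strings are placeholders, not the real GTS root hashes (those must be computed from the certificates at pki.goog):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSPinnedDomains</key>
  <dict>
    <key>your_firebase_project_URL.firebaseio.com</key>
    <dict>
      <key>NSIncludesSubdomains</key>
      <true/>
      <key>NSPinnedCAIdentities</key>
      <array>
        <!-- One entry per pinned root; placeholder SPKI SHA-256 hashes -->
        <dict>
          <key>SPKI-SHA256-BASE64</key>
          <string>PLACEHOLDER_SPKI_HASH_OF_GTS_ROOT_R1</string>
        </dict>
        <dict>
          <key>SPKI-SHA256-BASE64</key>
          <string>PLACEHOLDER_SPKI_HASH_OF_GTS_ROOT_R2</string>
        </dict>
      </array>
    </dict>
  </dict>
</dict>
```

Pinning the CA identities (rather than leaf certificates) tolerates leaf rotation, which addresses the bricking concern raised earlier in this thread.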