I think the correct way to do this is modifying the application and installing the modified application. I don't think the OS should offer this since it violates the way the security model is supposed to work.
Modifying the application and installing that keeps the security model intact. We really can't allow arbitrary hooking of applications that are not marked as debuggable, and if you're capable of modifying them to mark them as debuggable and signing them with a new key, it seems more sensible to make the changes to the apk itself rather than having the OS dynamically modify their code.
Okay. I'll give this some more thought. Repacking would probably be complicated by tamper detection, and maybe key management. Not that that's a reason to put it in the OS, of course. Also, redistribution of patched apps is much harder to verify than a distributed patch set.
I'm still not entirely convinced that allowing some type of secure access to process startup is a bad idea... but of course you're much closer to it than I am.
An app could also detect hooking, or detect OSes that tamper with it, and deliberately break itself on those.
What is "secure access" to application startup? It shouldn't be possible for an attacker who has gained even some minor level of control of the UI layer to inject code into apps. They could then steal all the data from the apps, run arbitrary malicious code, etc. It's a severe violation of the security model.
Software tamper detection by the app can be bypassed in the same way: by modifying the app's code. I doubt that many apps perform that kind of tamper detection. You could extend your hooking to show them a different version of the app's code and signature. The only way the app can win is by using hardware-based attestation. If we did what you're proposing, hardware-based attestation would no longer work for Auditor either. We're free to turn attestation into a permission, etc., but it really makes no difference, because apps can simply refuse to run without it, just as they can refuse to run on an OS that breaks the security model in this way.
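For concreteness, here's a minimal sketch of the kind of software tamper check meant here: the app compares its own signing certificate against a digest baked into its code (the constant below is a placeholder, not a real value). Anything that controls the app's code, whether a modified apk or OS-level hooking, can patch this out or feed it the expected answer, which is why it can't win without hardware-backed attestation.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Placeholder: hex SHA-256 of the developer's signing certificate.
const val EXPECTED_SIGNER_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

// Software-only tamper check: trivially bypassed by whoever controls the app's code.
fun signerMatches(context: Context): Boolean {
    @Suppress("DEPRECATION")
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNATURES)
    @Suppress("DEPRECATION")
    val signer = info.signatures?.firstOrNull() ?: return false
    val digest = MessageDigest.getInstance("SHA-256").digest(signer.toByteArray())
    return digest.joinToString("") { "%02x".format(it) } == EXPECTED_SIGNER_SHA256
}
```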
hardware-based attestation would no longer work for Auditor either
Specifically, Auditor could no longer trust the code of Auditor running on another device based on chaining trust to the app through the OS. It does that in order to perform software-based checks that are meaningful within the app's security model, i.e. if the OS has not been deeply compromised by an attacker during the current boot, then they can't fake those software checks. This allows the software checks to supplement the hardware checks, rather than only having basic information from hardware without a way to analyze the OS configuration, installed apps and other state. Auditor currently doesn't do much of this, largely because it hardly has access to any of it as a regular app and we haven't added support for it being a device manager or having special support in GrapheneOS via being a privileged app, etc. We're trying to keep it portable and to accomplish as much as possible without OS integration and without it being a device manager first.
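As a rough sketch of what the hardware-based side looks like (this uses the standard Android key attestation API, not Auditor's actual code; the alias is arbitrary): the app generates a key in the hardware keystore with a verifier-supplied challenge, and the resulting certificate chain, rooted outside the OS, is what a remote party can trust even if the OS itself is compromised or replaced.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

// Generate a hardware-backed key with an attestation challenge and return its
// certificate chain. The chain describes the device and verified boot state and
// is signed by keys the OS cannot forge, so a remote verifier can check it offline.
fun attestKey(challenge: ByteArray, alias: String = "attest-demo"): Array<Certificate> {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(challenge) // challenge chosen by the remote verifier
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
        initialize(spec)
        generateKeyPair()
    }
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    // First certificate belongs to the new key; the rest chain up to the attestation root.
    return keyStore.getCertificateChain(alias)
}
```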
Also, consider apps taking great care to protect their data, including providing fine-grained encryption at the app level rather than it being accessible whenever the device is unlocked. Those kinds of fine-grained controls are seriously hurt by allowing even temporary control over the UI to result in persistent compromise of apps via code injection. It just doesn't fit into the security model. We can't allow something like that without losing all kinds of important security properties.
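An example of the kind of fine-grained, app-level protection being referred to (a sketch using the standard Android keystore API; the alias and timeout are arbitrary): a key that only becomes usable for a short window after the user re-authenticates, rather than for the whole time the device is unlocked. Persistent code injection into the app would sidestep exactly this kind of design.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator

// Create an AES key in the hardware keystore that can only be used within 30 seconds
// of the user authenticating (PIN/pattern/biometric), independent of the device being unlocked.
fun createAuthBoundKey(alias: String = "notes-key") {
    val spec = KeyGenParameterSpec.Builder(
        alias, KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setUserAuthenticationRequired(true)
        .setUserAuthenticationValidityDurationSeconds(30)
        .build()
    KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore").run {
        init(spec)
        generateKey()
    }
}
```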
Going to close this since it's really not on the table as something we can do in regular production builds. This kind of thing can be supported in userdebug builds, and is essentially already supported indirectly in userdebug builds via adb shell. In a userdebug build, every app is considered debuggable, including release builds of apps. This means you can use run-as to run code within the sandbox of any app. There's also adb root and su for adb shell (not usable elsewhere) to transition to (mostly) uncontained root. It throws away a huge part of the security model tied to protecting apps, protecting against persistent compromise, protecting users against being tricked into compromising themselves (particularly in ways they can't undo), etc.
What is "secure access" to application startup? It shouldn't be possible for an attacker who has gained even some minor level of control of the UI layer to inject code into apps. They could then steal all the data from the apps, run arbitrary malicious code, etc. It's a severe violation of the security model.
For sure. My naive view is that this would be some type of package signed by an off-device key and verified by the OS. I wouldn't see this as anything that should be configurable during the current boot, or as something that allows persistent access to an attacker (at least as long as the key is protected).
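To make that concrete, a very rough sketch of the idea (all names hypothetical; this is not an existing or proposed GrapheneOS mechanism): the OS would hold a pinned public key and only apply a patch package whose detached signature verifies against it.

```kotlin
import java.security.KeyFactory
import java.security.Signature
import java.security.spec.X509EncodedKeySpec

// Verify a detached signature over a patch package against a pinned, off-device public key.
// pinnedKeyDer is the X.509/DER encoding of that key.
fun patchIsTrusted(patchBytes: ByteArray, detachedSig: ByteArray, pinnedKeyDer: ByteArray): Boolean {
    val publicKey = KeyFactory.getInstance("EC")
        .generatePublic(X509EncodedKeySpec(pinnedKeyDer))
    return Signature.getInstance("SHA256withECDSA").run {
        initVerify(publicKey)
        update(patchBytes)
        verify(detachedSig)
    }
}
```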
In short, I want a system that protects the OS from attackers, but not one that protects the apps from me. I see a lot of value in being able to modify the flow of an application without giving up a ton of security by using debug builds. For example, DNS-based ad blocking works okay for now, but DoH will likely make that useless before too long.
I get that this may be out of scope with the current goals. I see in a comment you made online that power users aren't the target audience. I guess if it's something I want to do, I'll have to sign the OS with my own key as well. Thanks for the info.
Antivirus-style blacklisting / enumerating-badness approaches are not the approach taken by GrapheneOS, and we won't make serious sacrifices to prop up approaches that are fundamentally bad, unworkable and not really relevant to long-term privacy and security. If you don't want an app to have data, don't let it have access to the data.
I also don't really see how modifying the apks is in any way worse or less capable than the OS dynamically modifying code at runtime. Either way, the app can wage a battle of trying to detect it and can eventually resort to requiring hardware-based attestation to win instead of playing a game of cat and mouse where their detection can always be worked around. The proper approach is modifying the apps, not having the OS modify them dynamically in a way that compromises the security model.
Apps are fully capable of blacklisting running on GrapheneOS, and we are not playing any games to work around that kind of thing. If you're concerned about apps detecting this kind of thing, there's really no point, seeing as they'll just blacklist running on anything other than the stock OS with SafetyNet and/or hardware-based attestation. If they already care about that kind of thing, they're probably already doing it. If an app uses SafetyNet, it isn't compatible with GrapheneOS simply due to the hard dependency on Play Services, before getting into the fact that it isn't the stock OS and won't pass. What makes you think an app that goes out of its way to detect tampering with the apk would permit running on an arbitrary OS with features that dynamically modify its code? I don't see how this would have any use case. It's simply not useful, and it has substantial sacrifices. Why don't you just take the approach of modifying apks, which is portable across operating systems and has no security sacrifice? I don't get why anything like this would be needed. I can't see any advantage to it, only downsides.
The app could simply refuse to run if it can't connect to those servers, requiring you to more deeply modify it once again in an escalating war with the app, which you are far better suited to dealing with by modifying the apk, particularly since there are already frameworks for doing that. Take the approach that actually makes sense: modify the code and then install the modified app. Don't blame GrapheneOS for not being suited to your niche; rather, you want a bad approach to this that doesn't make sense. We work on improving security and doing things right, not rolling back security and taking shortcuts and hacks instead of the proper path for accomplishing goals. The proper path to modifying the code of apps is to modify the apk, including working around whatever anti-tampering exists there. It doesn't make sense to move that responsibility into the OS with an ever escalating war of hacks and workarounds, costing substantial security and breaking the security model relied upon by apps like Auditor, Signal and many others. It doesn't belong in a privacy / security-focused OS. It belongs in a framework for modifying apks.
I think it's a very misguided way to try to improve privacy regardless, but if you are going to do it, just do it properly and accept that what you are doing is modifying the code, and the best way to do that is doing it directly and properly rather than expecting an escalating series of hacks supported by the OS. I really do not understand what it offers over modifying the apk. Can you list one advantage of developing and maintaining this and making substantial security sacrifices for it, including making it far easier for an attacker to backdoor apps and persist access? I can't see any. It doesn't make sense as a whole.
In short, I want a system that protects the OS from attackers, but not one that protects the apps from me. I see a lot of value in being able to modify the flow of an application without giving up a ton of security by using debug builds. For example, DNS-based ad blocking works okay for now, but DoH will likely make that useless before too long.
You're completely capable of modifying the apk, including using frameworks built for doing that, rather than expecting the OS to include ad-hoc, incomplete and costly (security, maintenance, loss of compatibility with apps) hacks/workarounds providing a very incomplete approach. Unlike doing this by modifying the app, doing it via the OS is quickly and permanently going to lose any battle with the app over detection, since the OS is not in a position to wage an agile war with app developers over stuff like this. Once they add detection of GrapheneOS or detection of that kind of hooking, it's over, and the app is unavailable for anyone on GrapheneOS to use. Instead, you could modify the apk and override whatever anti-tampering is there, where you have full control over the code that runs, instead of relying on very incomplete, ad-hoc hooking mechanisms in the OS with their substantial cost (which is completely incompatible with the goals of GrapheneOS).

Do it right, and don't push the responsibility for something that you're fully capable of doing with the apk onto the OS. Don't claim that the OS just doesn't care about what you want or about your niche; you're approaching an ultimately unworkable task in a particularly bad way, instead of in a way where you actually have the control and flexibility you're going to need based on the problems you presented. You're specifically talking about apps going out of their way to bypass the enumerating-badness / blacklisting you want to do, so you need to have full control over the apk.

The OS doesn't care what key an app is signed with or that you've modified it. It gives you full power to modify any app as much as you want. The only thing that stops you is hardware-based attestation, and there is nothing the OS can do about that, since if we just stopped the app from using it, it wouldn't work anyway: they'd treat that as a failure. I'm quite certain that if they're using hardware-based attestation they would not be explicitly whitelisting GrapheneOS in the first place, and if we did stuff like this it wouldn't be on the table to get whitelisted at all. In general, this just doesn't make sense.
I would like a way to hook methods in applications in a secure way. I don't know what the design of this would be.
This would allow patching out things like advertising classes, while not allowing new modules to be installed that could collect some other data.
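For reference, this is roughly what that capability looks like in existing hooking frameworks (an Xposed-style module sketch; the target package, class and method names are made up): a hook replaces a target method, here an imagined ad loader, with a constant at runtime, which is precisely the kind of dynamic modification of non-debuggable apps discussed above.

```kotlin
import de.robv.android.xposed.IXposedHookLoadPackage
import de.robv.android.xposed.XC_MethodReplacement
import de.robv.android.xposed.XposedHelpers
import de.robv.android.xposed.callbacks.XC_LoadPackage

// Xposed-style module: when the (hypothetical) target app loads, replace its
// (hypothetical) ad-loading method with a no-op that returns null.
class NoAdsHook : IXposedHookLoadPackage {
    override fun handleLoadPackage(lpparam: XC_LoadPackage.LoadPackageParam) {
        if (lpparam.packageName != "com.example.targetapp") return
        XposedHelpers.findAndHookMethod(
            "com.example.targetapp.ads.AdLoader", // hypothetical class name
            lpparam.classLoader,
            "loadAd",                             // hypothetical method name
            XC_MethodReplacement.returnConstant(null)
        )
    }
}
```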