hippware / rn-chat

MIT License

[dev] Obfuscate wocky magic key #3104

Closed · southerneer closed this 5 years ago

southerneer commented 5 years ago

@aksonov currently we're storing the master key for generating wocky tokens in the generateWockyToken method in wocky-client. We should probably be storing that sensitive data somewhere more secure on the native side. My initial thought is to store it in a plist file (or equivalent for Android), access it with rn-native-env, and pass it to wocky-client on every app start. What do you think?
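A minimal sketch of that flow, using react-native-config as a stand-in for rn-native-env (and a hypothetical setTokenKey entry point on wocky-client):

// Sketch only: react-native-config stands in for rn-native-env, and
// setTokenKey is a hypothetical wocky-client API.
import Config from 'react-native-config'
import { wockyClient } from 'wocky-client' // hypothetical import shape

// On app start, read the key from native config (plist-backed on iOS,
// resources on Android) and hand it to wocky-client once.
wockyClient.setTokenKey(Config.WOCKY_MAGIC_KEY)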

aksonov commented 5 years ago

Agree

bengtan commented 5 years ago

Random thought: Store half of the key in Xcode. Store the other half in javascript source as a constant. Concatenate at run time.

aksonov commented 5 years ago

Let’s revisit this ticket during Android development - we need to find a common react-native module to access both iOS/Android native parts.


bengtan commented 5 years ago

I think there's an argument for publishing an iOS version 'soon-ish' in order to deploy the server-side 'next' code to Staging (and Prod) ... before the Android work is finished.

Not certain yet, just a possibility.

So ... are we okay if an iOS version gets published before this ticket is done?

aksonov commented 5 years ago

I'm fine with both ways. Another thought - plist storage is NOT enough, because it is easily viewable. We need to use an encrypted keychain. Just found the following module: https://github.com/oblador/react-native-keychain - should work on Android too

bengtan commented 5 years ago

Another thought - plist storage is NOT enough, because it is easily viewable

By the same reasoning, Android's AndroidManifest is easily viewable as well.

We need to use an encrypted keychain. Just found the following module: https://github.com/oblador/react-native-keychain - should work on Android too

I had a look at this.

It's used for storing data that's obtained from other sources at runtime, i.e. user data entry, or data retrieved from the server.

I don't see how react-native-keychain would be used in our case where we have a constant that's hardcoded in source code.
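For illustration, its basic API (these calls do exist in react-native-keychain) writes a value at runtime and reads it back later - there's no slot for a compile-time constant:

import * as Keychain from 'react-native-keychain'

async function storeAndRead(tokenFromServer) {
  // Store a value obtained at runtime, e.g. something the server issued...
  await Keychain.setGenericPassword('wocky', tokenFromServer)

  // ...and read it back later; resolves to {username, password} or false.
  return Keychain.getGenericPassword()
}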

Assuming that react-native-keychain isn't suitable, and we don't want to store the magic key in plist/AndroidManifest, I can think of two possible solutions:

  1. Just obfuscate it in javascript code.
  2. Retrieve it from the server side.

... but 2 seems like a lot of work for not much benefit.

bengtan commented 5 years ago

Thought some more about this.

Thinking aloud ... what if we flip the question around? Instead of securing the magic key, what if the rest of the system was designed so it doesn't matter if bad people know the magic key?

If bad people know the magic key, they can forge client-side JWTs, but they can't use them to log in because they can't forge the Firebase token.

So ... maybe the magic key doesn't need securing?

southerneer commented 5 years ago

Maybe? @toland or @bernardd what do you guys think about Beng's theory just above?

toland commented 5 years ago

If bad people know the magic key, they can forge client-side JWTs, but they can't use them to log in because they can't forge the Firebase token.

So ... maybe the magic key doesn't need securing?

That defeats the purpose of generating the JWT on the client. We wanted some way to be reasonably sure that the software attempting authentication was, in fact, our client. There are basically two ways to do this: a shared secret and challenge-response authentication. The former was easier to implement, though less secure than the latter, so that is the route we went.

Failing to secure/obfuscate the shared secret would negate any benefit we get from the client-generated JWT. If we don't care about cryptographic verification, then we can just get rid of the client-generated JWT and use the Firebase token alone, which would simplify everything at the cost of security.

bengtan commented 5 years ago

I've been reading about obfuscation of secrets in javascript. The consensus (which I agree with) seems to be that obfuscation is just that ... obfuscation. It's not going to be foolproof, and any skilled attacker will be able to extract the client-side key from the client-side app/package/bundle/whatever.

I don't mind doing obfuscation ... so long as everyone is aware of the limitations of obfuscation, i.e. it's not really secure.

southerneer commented 5 years ago

I'm fine with obfuscation. We do still have the option of securing the key on the native side with the caveat that it would take more time/effort and would require a separate solution for Android.

bernardd commented 5 years ago

Here's my question(s): what is the damage if the secret key gets into the "wrong hands"? Is it that they can impersonate the client? If so, what's the potential damage from that? If the answer is "meh, not much really" then obfuscation is probably fine, at least for now. If, however, there's some serious actual damage that could be done by getting the key, then we really need to go down the challenge-response road.

I'd hope that we've written the server in such a way that it doesn't matter whether it's a "real" client or someone else connecting to us - they shouldn't be able to do anything more than the real client. If there are exceptions to that, we should be seriously thinking about how to mitigate them.

toland commented 5 years ago

I don't mind doing obfuscation ... so long as everyone is aware of the limitations of obfuscation, i.e. it's not really secure.

"Secure" is a spectrum, not a binary attribute.

Here's my question(s): what is the damage if the secret key gets into the "wrong hands"?

At this point, it isn't significant. Someone can impersonate the client, but they would still need the Firebase token to actually authenticate. I view client verification as a layer of defense in depth. We can easily add new secrets on the server side, if necessary. In the worst case scenario, we can revoke a secret if a really bad actor gets ahold of it. That would cause some disruption to the client, but it also may be the least bad option.

I've been reading about obfuscation of secrets in javascript. The consensus (which I agree with) seems to be that obfuscation is just that ... obfuscation.

Quite. A better option would be to compile the secret into the Obj-C code. Still obfuscation, but stronger obfuscation than the JS.

But keep in mind that this is running on an iPhone, not a browser. To de-obfuscate the Javascript secret, you have to get the app package off of the phone. It is doable, but it isn't necessarily easy. Certainly not as easy as getting JS out of the completely unprotected browser cache.

If, however, there's some serious actual damage that could be done by getting the key, then we really need to go down the challenge-response road.

From a security perspective, I think we should always assume that a third party accessing our API has the potential to be disruptive. While I think we have done a reasonable job making the API secure, designing and testing a hardened API seems like a poor use of our time at this point. Ultimately, I think challenge-response is the right answer, but it is more than we really need right now.

bengtan commented 5 years ago

To summarise:

Everyone's fine with obfuscation. It would be nice to obfuscate by compiling it into native code (obj-C and java) ... we'll do it if it's easy. Otherwise we'll obfuscate in javascript.

southerneer commented 5 years ago

Doing a little digging on this, I found that our JS is minified by default on non-dev builds and codepushes. It looks something like this:

...
n.convertBotPost=x,n.convertBot=h,n.convertNotification=g,n.convertNotifications=function(e){return e.map(g).filter(function(e){return e})},n.generateWockyToken=function(e){return d.default.jws.JWS.sign('HS512',{alg:'HS512',typ:'JWT'},e,{utf8:'0xszZmLxKWdYjvjXOxchnV+ttjVYkU1ieymigubkJZ9dqjnl7WPYLYqLhvC10TaH'})},n.assert=function(e,t){if(!e){if(t=t||'Assertion failed','undefined'!=typeof Error)throw new Error(t);throw t}}
...

...which is buried in 1458 lines of compacted code. It's not encrypted or obfuscated otherwise, but by the same token it's not very easy to find the magic key either. Is this good enough for now?
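For reference, a de-minified reading of that generateWockyToken line (the library is jsrsasign by the look of it; the require shape is an assumption, and the key is truncated here since it appears in full above):

const rs = require('jsrsasign') // assumed import shape

function generateWockyToken(payload) {
  // HS512-signed JWT; the magic key sits right in the bundle as a literal.
  return rs.jws.JWS.sign(
    'HS512',
    { alg: 'HS512', typ: 'JWT' },
    payload,
    { utf8: '0xszZmLxKWdYjvjX...' } // truncated; full value above
  )
}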

southerneer commented 5 years ago

As for "true" obfuscation, what's a good way to hide that string that isn't crackable by simply copying the code and dropping it into any online JS evaluator tool?

bengtan commented 5 years ago

Random idea: Split it into four substrings. Do Rot13 on one or two of them. Do some other reversible string operation on another. Reconstruct the key dynamically at runtime.

Or make up your own imaginative scheme.
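A sketch of that kind of scheme (all fragment values and the final key below are placeholders, not the real values):

// Everything here is made up for illustration.
const rot13 = s =>
  s.replace(/[a-z]/gi, c => {
    const base = c <= 'Z' ? 65 : 97
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base)
  })

const reverse = s => s.split('').reverse().join('')

// Four innocuous fragments, ideally declared far apart in the bundle.
const a = rot13('frperg')  // -> 'secret'
const b = reverse('cigam') // -> 'magic'
const c = '-'
const d = 'key'

// Reconstructed only at runtime, just before signing.
const magicKey = a + c + b + c + d // 'secret-magic-key'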

toland commented 5 years ago

Random idea: Split it into four substrings.

This is still vulnerable to "simply copying the code and dropping it into any online JS evaluator tool." Whatever obfuscation scheme you come up with will be available to anyone who has access to the code.

That having been said, if you have thousands of lines of minified JS and the key and the code that uses/deobfuscates it are not close to each other, that kind of attack, while still doable, is less practical.

I think the only reasonably secure and practical solution is to include the key and/or the deobfuscation function in native code. That should be the medium-term objective. In the meantime, I don't think we need to get wrapped around the spokes of this issue.

bernardd commented 5 years ago

This is still vulnerable to "simply copying the code and dropping it into any online JS evaluator tool." Whatever obfuscation scheme you come up with will be available to anyone who has access to the code.

To take that further, we ultimately have to pass the key into a library function (jws.JWS.sign by the look of the above). If someone has debug access at that point, there's literally no way I can see to prevent them getting access to the key. It's simply got to be available so that it can be passed to the function. Find a function that looks like a signing function, slap a breakpoint there, and you're done.

The same, of course, is true of native code but the barrier to entry is a lot higher.

aksonov commented 5 years ago

I agree with @bernardd. Storing the key on the native side doesn't solve anything because we are using the javascript jws.JWS.sign function. So the only way I see is to use iOS/Android native signing functions, but that will require a lot of time (to inject a native call inside wocky-client and implement it for both iOS and Android)

bengtan commented 5 years ago

To summarise:

Everyone's fine with obfuscation. It would be nice to obfuscate by compiling it into native code (obj-C and java) ... we'll do it if it's easy. Otherwise we'll obfuscate in javascript.

Taking into account the most recent few comments, I think the consensus has changed to:

The key should be in native code. However, we should do the signing in native code too, even though that's more work.

aksonov commented 5 years ago

@bengtan It is a lot of work... Maybe I missed some past discussion, but why can't we generate the JWT token on the server side to avoid storing any secret keys on the client side? https://medium.com/vandium-software/5-easy-steps-to-understanding-json-web-tokens-jwt-1164c0adfcec

The client would pass the Firebase token to some 'login' method, the login method would return a JWT token, and then this token would be used for GraphQL operations.
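A sketch of that flow (the endpoint and field names are made up):

// Hypothetical server-side-JWT flow; none of these names are real endpoints.
async function login(firebaseToken) {
  const res = await fetch('https://api.example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ firebaseToken }),
  })
  const { jwt } = await res.json()
  return jwt // then attached to subsequent GraphQL operations
}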

I'm not aware of any client-side secret keys that Facebook or Google use for their authentication procedures. Does the Firebase client code have any secret key?

southerneer commented 5 years ago

@aksonov see @toland 's comment above: https://github.com/hippware/rn-chat/issues/3104#issuecomment-449112622

aksonov commented 5 years ago

I don’t see arguments there. I don’t think we should have stronger auth than Google or Facebook

southerneer commented 5 years ago

Phil was just explaining why we need to generate the token on the client side in that comment. It ensures (to a reasonable degree) that the request is coming from our client. If we don't do that then anyone could send a valid Firebase token to the back-end. So we're basically doing two kinds of auth: 1) that it's a "real" person (the Firebase part) and 2) that it's our codebase making the request.

I agree that improving on Google/Facebook auth (part 1) is not worth pursuing, but part 2 does seem valuable and something we should strive for.

toland commented 5 years ago

Eric is correct. Having the client sign a JWT isn't about authenticating the user, it is about authenticating the client.

There are several reasons why we want to do this. The most important is that we are billing ourselves as a company that takes user privacy very seriously. In order to do that, we have to protect the user's data the best we can. It isn't enough to simply promise that we won't sell it.

By authenticating the client we make it harder to carry out certain kinds of probing attacks and man-in-the-middle attacks. It doesn't guarantee anything, but security is often about making attacks difficult enough that would-be bad actors are dissuaded. The lock on your front door can be easily picked, but it dissuades casual intruders from just walking into your house.

bengtan commented 5 years ago

Here's something a bit unusual.

What if we obfuscate the key in WebAssembly? It would be nice to also do the signing in WebAssembly too but that's an aspiration (... maybe for another time).

A week or two back I had a look. It looks like there is a WebAssembly object in React Native, but that's as far as I got.

I'm not sure if this will actually work, but ... are there any thoughts or objections? Or is it too early to say (does everyone have to go read up on WebAssembly first)?

bernardd commented 5 years ago

What if we obfuscate the key in WebAssembly?

I don't see what that gains us. First, it doesn't do anything to address the issue of sticking a breakpoint on the signing function, but even if we did the signing in wasm:

WebAssembly is designed to be pretty-printed in a textual format for debugging, testing, experimenting, optimizing, learning, teaching, and writing programs by hand. The textual format will be used when viewing the source of Wasm modules on the web.

That's exactly the opposite of obfuscation.

bengtan commented 5 years ago

Purely as an FYI to spark interest ...

From https://mbebenita.github.io/WasmExplorer/

Compiling:

int testFunction(int* input, int length) {
  int sum = 0;
  for (int i = 0; i < length; ++i) {
    sum += input[i];
  }
  return sum;
}

gives the following wasm (text format):

(module
 (table 0 anyfunc)
 (memory $0 1)
 (export "memory" (memory $0))
 (export "_Z12testFunctionPii" (func $_Z12testFunctionPii))
 (func $_Z12testFunctionPii (; 0 ;) (param $0 i32) (param $1 i32) (result i32)
  (local $2 i32)
  (set_local $2
   (i32.const 0)
  )
  (block $label$0
   (br_if $label$0
    (i32.lt_s
     (get_local $1)
     (i32.const 1)
    )
   )
   (loop $label$1
    (set_local $2
     (i32.add
      (i32.load
       (get_local $0)
      )
      (get_local $2)
     )
    )
    (set_local $0
     (i32.add
      (get_local $0)
      (i32.const 4)
     )
    )
    (br_if $label$1
     (tee_local $1
      (i32.add
       (get_local $1)
       (i32.const -1)
      )
     )
    )
   )
  )
  (get_local $2)
 )
)

Compiling:

double fact(int i) {
  long long n = 1;
  for (;i > 0; i--) {
    n *= i;
  }
  return (double)n;
}

gives the following wasm (text format):

(module
 (table 0 anyfunc)
 (memory $0 1)
 (export "memory" (memory $0))
 (export "_Z4facti" (func $_Z4facti))
 (func $_Z4facti (; 0 ;) (param $0 i32) (result f64)
  (local $1 i64)
  (local $2 i64)
  (block $label$0
   (br_if $label$0
    (i32.lt_s
     (get_local $0)
     (i32.const 1)
    )
   )
   (set_local $1
    (i64.add
     (i64.extend_s/i32
      (get_local $0)
     )
     (i64.const 1)
    )
   )
   (set_local $2
    (i64.const 1)
   )
   (loop $label$1
    (set_local $2
     (i64.mul
      (get_local $2)
      (tee_local $1
       (i64.add
        (get_local $1)
        (i64.const -1)
       )
      )
     )
    )
    (br_if $label$1
     (i64.gt_s
      (get_local $1)
      (i64.const 1)
     )
    )
   )
   (return
    (f64.convert_s/i64
     (get_local $2)
    )
   )
  )
  (f64.const 1)
 )
)

bernardd commented 5 years ago

It's certainly less trivially readable than JS, but the fact remains that we have three options for the actual signing that I can think of:

  1. Call back into JS, in which case this solves nothing, as noted above.
  2. Wait for a wasm library that supports signing (no idea if there is one, but it will still have a signing function which can still be very easily found and breakpointed, so we're almost back where we are with JS, albeit with perhaps slightly less well-known libraries).
  3. Write our own crypto. (File under: "DO NOT WRITE YOUR OWN CRYPTO")

bengtan commented 5 years ago

Please disregard my comments above about WebAssembly. I've gone a bit cold on it.


This topic came up in a recent back-end meeting. It became apparent that, whilst there are things we'd like to do security/obfuscation-wise, it's possibly?/probably? not practical in the immediate short term.

So we have the following options, in order of increasing effort required:

  1. Leave the code as-is. No obfsucation
  2. Obfuscate the key a little bit in javascript with a minimum amount of effort.
  3. Store the key in native code (with some obfuscation?). Call into native code from JS to retrieve it at run time. Do the signing in JS.
  4. Store the key in native code. Do the signing in native code. JS code calls into native code with the JWT fields and returns a JWT token.
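For concreteness, option 3 from the JS side might look like this (WockyKey is a hypothetical native module; each platform would implement its getMagicKey):

import { NativeModules } from 'react-native'
const rs = require('jsrsasign') // assumed import shape

async function generateWockyToken(payload) {
  // The key lives (possibly obfuscated) in Obj-C/Java; JS sees it only
  // transiently, at signing time. Signing itself still happens in JS.
  const key = await NativeModules.WockyKey.getMagicKey() // hypothetical module
  return rs.jws.JWS.sign('HS512', { alg: 'HS512', typ: 'JWT' }, payload, { utf8: key })
}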

The decision of which one to do will depend on the circumstances and the trade-offs at the time. No decision has been made yet.

Hence, I'm also going to move this to 'Ready' since we're not implementing it straight away.

bengtan commented 5 years ago

Related, fyi:

Hiding Secrets in Android Apps https://rammic.github.io/2015/07/28/hiding-secrets-in-android-apps/

bengtan commented 5 years ago

There's been some discussion amongst the front-end team on and off over the last week or two.

This is what we propose to do:

  1. Find a React Native component which does JWT signing in native code.
  2. Fork/Patch/Hardcode the native source code so it always signs with the wocky magic key.
  3. Then, for example, the app calls into the component with a fake/decoy key.

This means that:

  1. The real magic key only ever exists in native code. The javascript bundle contains nothing but a decoy.
  2. Anyone reading the JS bundle, or breakpointing the JS side of the signing call, sees only the decoy key.

This is presuming that we find a suitable RN component. We've already found one or two which look interesting, but need to dig further.
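As a sketch of steps 2-3 (the module name and its sign signature are stand-ins, not a component we've settled on):

import JwtSign from 'react-native-jwt-sign' // hypothetical forked module

// What anyone reading the JS bundle (or breakpointing this call) sees.
const DECOY_KEY = 'not-the-real-key'

function generateWockyToken(payload) {
  // The fork's native code ignores the key argument and always signs with
  // the compiled-in wocky magic key instead.
  return JwtSign.sign(payload, DECOY_KEY, { alg: 'HS512' })
}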

bengtan commented 5 years ago

Moving to Up Next and assigning.

@aksonov: If you want help with modifying native Android RN modules, just gimme a yell. I can do that part.

bengtan commented 5 years ago

bengtan to work on Android/java changes if necessary.

bengtan commented 5 years ago

Hmmmm ... I don't think this needs to go to QA. It's more of an internal dev change.

If it doesn't work, QA won't be able to log in. I'm sure they'll notice and post a ticket if that happens.

Hence, closing.