hasura / graphql-engine

Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events.
https://hasura.io
Apache License 2.0

Improve security for action handlers #5112

Open haf opened 4 years ago

haf commented 4 years ago

Description

Right now, securing webhooks/action handlers requires you to share a plaintext key between them. There's no cryptographic support for proving shared knowledge of the key without sending it.

Current behaviour

What's not so secure about the current behaviour

Suggested behaviour

Objections

How does this work?

  1. Your Hasura server has an Admin key
  2. Your action has a name
  3. You derive a per-action key as hmac(algo: sha256, key: admin secret as a byte array, message: action name as a UTF-8-encoded byte array); that derived key is the pre-shared key the action handler uses to authenticate incoming web requests
  4. When Hasura sends an HTTP request, it adds an hmac header computed over the newline-separated concatenation of the method, the complete URL, a request id, and the body's bytes
  5. When the action handler receives the request, together with the hmac and request-id headers, it validates the sender by recomputing the MAC as above with the pre-shared per-action key (sketched below)
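
A minimal sketch of steps 3–5 in TypeScript, using Node's built-in crypto module. The hex encoding of the header value, the exact canonical string, and all names here are assumptions for illustration; only the overall flow follows the steps above.

```ts
import { createHmac, timingSafeEqual } from "crypto";

// Step 3: derive the per-action key from the admin secret and the action name.
function deriveActionKey(adminSecret: string, actionName: string): Buffer {
  return createHmac("sha256", Buffer.from(adminSecret, "utf8"))
    .update(Buffer.from(actionName, "utf8"))
    .digest();
}

// Step 4 (sender side): MAC over the newline-separated method, complete URL,
// request id, and body bytes. Hex-encoding the header value is an assumption.
function signRequest(
  actionKey: Buffer,
  method: string,
  url: string,
  requestId: string,
  body: Buffer
): string {
  const message = Buffer.concat([
    Buffer.from(`${method}\n${url}\n${requestId}\n`, "utf8"),
    body,
  ]);
  return createHmac("sha256", actionKey).update(message).digest("hex");
}

// Step 5 (action handler side): recompute the MAC and compare in constant time.
function verifyRequest(
  actionKey: Buffer,
  method: string,
  url: string,
  requestId: string,
  body: Buffer,
  hmacHeader: string
): boolean {
  const expected = Buffer.from(signRequest(actionKey, method, url, requestId, body), "hex");
  const received = Buffer.from(hmacHeader, "hex");
  return received.length === expected.length && timingSafeEqual(received, expected);
}

// Both sides can derive the same key from the admin secret and the action name.
const key = deriveActionKey("my-admin-secret", "createUser");
const body = Buffer.from(JSON.stringify({ input: { name: "Ada" } }), "utf8");
const sig = signRequest(key, "POST", "https://handler.example.com/createUser", "req-123", body);
console.log(verifyRequest(key, "POST", "https://handler.example.com/createUser", "req-123", body, sig)); // true
```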

Handling key rotation

If you need to rotate the action handler key, Hasura could be extended with per-action variables, like the ones it has now, but with one that is specifically used to create the hmac header.

If you happen to expose the handler key, you still cannot derive the admin key from it.

Alternatives suggested

From #4645

  1. Global ACTION_KEY: drawbacks include that you now cannot easily target key rotation per action
  2. "Calling the Hasura management API at runtime to generate secrets to use"; this is a red herring, since it would require special care/state to manage those keys, permissions for them and would make it much harder to do migrations, with the keys as part of the migrations, which in turn would force you to encrypt the keys with a certificate, which in turn would force you to implement PKI to get that certificate -- so now you've just moved the problem further down the line and made a solution noone will actually use -- because it's too complicated

Example implementation

https://github.com/haf/haskell-nextjs-webhooks (request id not yet added)

Adverse effects

Alternatives considered

njaremko commented 4 years ago

Up until I hit "Objections", the proposal looks (mostly) fine. Almost the entirety of that section is you throwing shade at various people.

With that out of the way:

"How does this work":

"Handling key rotation"

Any per-action variables would be included in the headers, and would be included in the proposed payload HMAC header anyway

"Alternatives suggested"

  1. Providing the option to include a HASURA_ACTION_SECRET env var to hasura, and automatically have it included in all action payloads is easier to implement (both in hasura, and in the action handlers), and would fit 99% of use cases. If any of your secrets are compromised, you should probably rotate all your secrets anyway.
  2. You've misquoted, this solution is to generate a secret key at startup of your action handler (assuming a non-serverless handler service), and use the hasura management API to configure the corresponding hasura action to use that key. This type of solution is very common in practice, and solves the key rotation issue (just restart the service). None of the issues you mentioned in response to this are valid
    • the only "key" being managed here is that your service has the hasura admin secret, which is already very common
    • This doesn't affect your migrations, you just define your metadata without a secret key header, and the handler will configure it at runtime.

"Adverse effects"

"Alternatives considered"

haf commented 4 years ago

We'll be going round and round on the "objections" bit, but that is honestly not why I'm here. So here's my final take on your objections-objections.

I don't care that you personally are lax in your security posture, and I don't agree with your take that "TLS is enough for 99%". I think that kind of talk generally sabotages software security and is the kind of talk that junior devs/ops do. If you make statements like that, show me the research so I can verify your statement. I would put it at being enough for 50% of the cases, the other 50% being internal apps without CAs.

As for the links, I haven't been able to find evidence they support HTTP webhooks, but it's beside the point anyway as my other attack vectors still stand.

You're also wrong that larger companies are more secure as a rule of thumb, again reflecting inexperience in my eyes. Larger companies instead spend on monitoring software, put TLS-intercepting middleware at the edges, do "trusted subsystem" security instead of zero-trust networks, and run AV software on servers. They focus on auditing over improving skills. Putting up intermediate CAs is a two-year project for large companies.

Also, developer/ops skill is a regressive function of time with diminishing returns on investment. As such, IRL, if the Y-axis is skill level, you have a long tail of very unskilled developers (the dark matter) and a narrow, tall peak of very skilled developers. This plays out so that most large companies have really bad security, primarily from buggy in-house apps, and secondarily from unknown unknowns due to the division of labour creating blind spots. They also have a huge attack surface most of the time, from the large number of artifacts they operate.

Hasura sells to these larger companies and, in light of the above, should do everything to provide security out of the box, with opt-in to lesser security, rather than the other way around. As such, actions over HTTP are a very realistic scenario for their business model.

I would be happy if you reiterate your comments and for every statement back it up with peer reviewed research, or if you wish, anecdotes.

I hope we can focus on the merits of the proposal, going forward, instead of making hand-wavy statements about the state of the world of software and what is a "realistic risk". Not that I don't enjoy it, over a beer 🍻, that is.


For 3:

Is your proposed solution to use the derived key as a shared secret?

Yes

If so, then why are you introducing HMAC at all? One of the major benefits of introducing HMAC is that you don't have to include a secret that can be compromised in the payload, but you're still proposing we include one.

No I don't. That's not what a shared secret means here. The HMAC is proof the sender knows the shared secret.

Just use any randomly generated key,

This would require saving it somewhere and configuring it in Hasura; now you have to know two keys instead of one. So this is out.

use (insert any secure cryptographic hashing function) of "secret key" + "action name" (+ "key rotation number?")

The most commonly used cryptographic hash functions, SHA-1 and SHA-2/SHA-256, are Merkle–Damgård constructions and are therefore vulnerable to length-extension attacks. This primarily concerns the body message; however, I use HMAC for consistency, and there should be no negative effects of using it like this.
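
To illustrate the distinction being drawn here, a short sketch contrasting a naive keyed digest with HMAC; the key and message values are made up:

```ts
import { createHash, createHmac } from "crypto";

const key = Buffer.from("per-action-key");
const message = Buffer.from("POST\nhttps://handler.example.com/createUser\nreq-123\n{}");

// Naive keyed digest: sha256(key || message). Because SHA-256 is a
// Merkle–Damgård construction, someone who sees this digest (but not the key)
// can compute a valid digest for key || message || padding || suffix,
// i.e. extend the signed message without knowing the key.
const naive = createHash("sha256").update(Buffer.concat([key, message])).digest("hex");

// HMAC-SHA256 applies the hash twice with keyed inner/outer pads, which rules
// out length extension; hence HMAC is used throughout the proposal.
const mac = createHmac("sha256", key).update(message).digest("hex");

console.log({ naive, mac });
```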

For (4), header should follow proper header etiquette, something like X-Hasura-Signature

https://tools.ietf.org/html/rfc6648 -- doing it like you suggest has been deprecated for a long time now.

Any per-action variables would be included in the headers, and would be included in the proposed payload HMAC header anyway

This suggestion is that there's a separate per-action key-value list that is semantically known to the Hasura engine.

Providing the option to include a HASURA_ACTION_SECRET env var to hasura, and automatically have it included in all action payloads is easier to implement

No, I disagree. Can you back this up, please? I just learnt a language and wrote the code in a day, showing you my suggestion was easy to implement; where's your evidence?

If any of your secrets are compromised, you should probably rotate all your secrets anyway.

No, I disagree. Again, perhaps you have a different mental model for how you manage enterprise/ops secrets, but the way I do it is by segregating them into RBAC-specific vaults in 1Password so that you can have granular rotation. It's called compartmentalisation. For secrets like the action-keys discussed here, I use something called sealed-secrets together with PKI, so I can use a gitops flow. For plaintext secrets available in Kubernetes, I again use RBAC to delimit who can see what secrets, and as such allow for key rotation when people who have had access to these plaintext secrets change jobs/roles. Perhaps you have a different experience? I would really like to hear you backing up your statements.

You've misquoted, this solution is to generate a secret key at startup of your action handler (assuming a non-serverless handler service), and use the hasura management API to configure the corresponding hasura action to use that key.

Ok, then how do you do this with a gitops type flow? Doesn't your suggestion require manual intervention? What do you do when you cluster your action handlers? Does every action handler generate its own secret then?

This type of solution is very common in practice, and solves the key rotation issue (just restart the service).

Are you serious? To spell it out; normally your software should be designed so it can be restarted multiple times a day.

the only "key" being managed here is that your service has the hasura admin secret, which is already very common

This is a huge security risk! It's not common when I deploy Hasura, I can promise you that.

Crypto isn't particularly slow, but it will affect your throughput. A significant number of JSON parsers are streaming parsers, so having to buffer, validate, and parse the whole payload before you get started can be significant, so we should acknowledge it.

I disagree again; only a very tiny number of JSON parsers are streaming, because you have to code around them (they cannot just be slotted in place of a regular parser). Can you back up your claim, please?
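
To make the buffering trade-off concrete, here is a hypothetical Express-based handler sketch; the header name, env var, and route are assumptions, and for brevity it MACs only the raw body rather than the full method/URL/request-id string from the proposal:

```ts
import express from "express";
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical per-action key, hex-encoded in an env var for this sketch.
const PER_ACTION_KEY = Buffer.from(process.env.ACTION_HANDLER_KEY ?? "", "hex");

const app = express();

app.use(
  express.json({
    // body-parser's verify hook runs with the fully buffered raw body before
    // parsing, which is exactly the buffering being discussed here.
    verify: (req, _res, rawBody) => {
      const received = Buffer.from(String(req.headers["hmac"] ?? ""), "hex");
      const expected = createHmac("sha256", PER_ACTION_KEY).update(rawBody).digest();
      if (received.length !== expected.length || !timingSafeEqual(received, expected)) {
        throw new Error("invalid signature"); // rejected before JSON parsing begins
      }
    },
  })
);

app.post("/action", (req, res) => {
  res.json({ ok: true, action: req.body?.action?.name });
});

app.listen(3000);
```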

Hasura would probably end up having to make a signature validation library to automate the process in at least JS (similar to stripe).

Maybe, maybe not. If I can build one in a few hours, I don't see the problem. This is not an adverse effect, it's just programming.

If you're going to encourage use of HMAC, people will use it with http endpoints

This doesn't follow logically, so your whole point here is moot.

I doubt anybody would suggest legacy hashes like sha1 or md5 in a brand new hmac flow

I don't doubt they would, as MD5's weaknesses are mostly collision/tunnelling attacks, and if it's in the middle of an HMAC, that risk is mitigated. Also, MD5 is faster, and this scheme is implemented much more frequently (from researching this before sending this RFC), including when you only want to validate against content corruption in transit (see e.g. Google Cloud Storage's MD5 hashes). All in all, though, I think it's better to aim at "easy to implement" and do the simpler scheme suggested in this thread.

webdeb commented 4 years ago

I spent today learning Haskell and building it https://github.com/haf/haskell-nextjs-webhooks

Really cool. I value the effort you made to show that it's no magic at all.

I think, since Hasura depends so heavily on external services, it just makes sense to have a solid approach to authenticating Hasura's requests to the outside world. JWT is how Hasura authenticates incoming requests, and HMAC is conceptually the same.

Looking forward to seeing how this discussion goes. 👍

njaremko commented 4 years ago

I'm going to skip over the first half.

For 3:

Is your proposed solution to use the derived key as a shared secret?

Yes

If so, then why are you introducing HMAC at all? One of the major benefits of introducing HMAC is that you don't have to include a secret that can be compromised in the payload, but you're still proposing we include one.

No I don't. That's not what a shared secret means here. The HMAC is proof the sender knows the shared secret.

Ah, so you want to derive this key in hasura, and then share that with your action as the signing key?

Just use any randomly generated key,

This would require saving it somewhere and configuring it in Hasura; now you have to know two keys instead of one. So this is out.

Unless you're planning to give your action handlers the admin secret (which you seem to be against) to derive the same key on their end, you're still deriving and having to configure this additional key in the action handler. You still have two keys.

use (insert any secure cryptographic hashing function) of "secret key" + "action name" (+ "key rotation number?")

The most commonly used cryptographic hash functions, SHA-1 and SHA-2/SHA-256, are Merkle–Damgård constructions and are therefore vulnerable to length-extension attacks. This primarily concerns the body message; however, I use HMAC for consistency, and there should be no negative effects of using it like this.

The links seem to imply you think I'm unfamiliar with these things. A length extension attack on the derived key would first require it to be exposed to an attacker (which shouldn't happen, since it's not included in the payload), then it would require knowledge about the original message, which no one would have in this scenario.

For (4), header should follow proper header etiquette, something like X-Hasura-Signature

https://tools.ietf.org/html/rfc6648 -- doing it like you suggest has been deprecated for a long time now.

Existing hasura headers follow this format, and the deprecation you've referenced above is only for public facing situations. They explicitly do not suggest against it in private server to server communication.

Any per-action variables would be included in the headers, and would be included in the proposed payload HMAC header anyway

This suggestion is that there's a separate per-action key-value list that is semantically known to the Hasura engine.

Why do you want this separate list?

Providing the option to include a HASURA_ACTION_SECRET env var to hasura, and automatically have it included in all action payloads is easier to implement

No, I disagree. Can you back this up, please? I just learnt a language and wrote the code in a day, showing you my suggestion was easy to implement; where's your evidence?

No one has claimed that adding a crypto dependency and calling a function is hard. It's just not as easy. I don't know how you can argue that basic string comparison isn't easier. Further, in the context of your example webhook providers above, people have been messing up the implementation of the verifier in their application since day 1.
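
For comparison, the "basic string comparison" alternative being referred to is roughly the sketch below; the header name and env var are hypothetical, and the comparison is done in constant time to avoid timing leaks:

```ts
import { timingSafeEqual } from "crypto";
import type { IncomingHttpHeaders } from "http";

// Hypothetical shared secret that Hasura would forward as a header on every
// action call, e.g. configured from an env var on both sides.
const ACTION_SECRET = process.env.ACTION_SECRET ?? "";

function checkActionSecret(headers: IncomingHttpHeaders): boolean {
  const received = Buffer.from(String(headers["x-action-secret"] ?? ""), "utf8");
  const expected = Buffer.from(ACTION_SECRET, "utf8");
  // timingSafeEqual requires equal lengths, so check that first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```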

If any of your secrets are compromised, you should probably rotate all your secrets anyway.

No, I disagree. Again, perhaps you have a different mental model for how you manage enterprise/ops secrets, but the way I do it is by segregating them into RBAC-specific vaults in 1Password so that you can have granular rotation. It's called compartmentalisation. For secrets like the action-keys discussed here, I use something called sealed-secrets together with PKI, so I can use a gitops flow. For plaintext secrets available in Kubernetes, I again use RBAC to delimit who can see what secrets, and as such allow for key rotation when people who have had access to these plaintext secrets change jobs/roles. Perhaps you have a different experience? I would really like to hear you backing up your statements.

Obviously RBAC is a good idea. I was suggesting that if any of your keys are compromised by an attacker, you should probably rotate all your keys in case any other keys were compromised. Since you're apparently under attack.

Earlier you were saying we should be thinking about legacy enterprise companies that can't handle fancy configurations. Now you're suggesting RBAC for backend secrets, sealed-secrets, PKI, GitOps, and Kubernetes, which the legacy enterprise companies you were worried about earlier presumably aren't messing with.

You've misquoted, this solution is to generate a secret key at startup of your action handler (assuming a non-serverless handler service), and use the hasura management API to configure the corresponding hasura action to use that key.

Ok, then how do you do this with a gitops type flow? Doesn't your suggestion require manual intervention? What do you do when you cluster your action handlers? Does every action handler generate its own secret then?

This type of solution is very common in practice, and solves the key rotation issue (just restart the service).

Are you serious? To spell it out; normally your software should be designed so it can be restarted multiple times a day.

So? Restart as much as you want. Nothing I've described prevents this.

the only "key" being managed here is that your service has the hasura admin secret, which is already very common

This is a huge security risk! It's not common when I deploy Hasura, I can promise you that.

It's only a huge security risk if:

Having action handlers use the admin secret is extremely popular, and encouraged by the people at hasura. Tanmai has given whole presentations about how you can use Hasura as a data service to do connection pooling and have type safe queries in your serverless backend (using the admin secret).

Crypto isn't particularly slow, but it will affect your throughput. A significant number of JSON parsers are streaming parsers, so having to buffer, validate, and parse the whole payload before you get started can be significant, so we should acknowledge it.

I disagree again; only a very tiny number of JSON parsers are streaming, because you have to code around them (they cannot just be slotted in place of a regular parser). Can you back up your claim, please?

Anyone that deals with large payloads ends up switching to streaming parsers. They're not particularly difficult to use: https://docs.serde.rs/serde_json/struct.StreamDeserializer.html

Hasura would probably end up having to make a signature validation library to automate the process in at least JS (similar to stripe).

Maybe, maybe not. If I can build one in a few hours, I don't see the problem. This is not an adverse effect, it's just programming.

You have to account for:

All of that is effort. Effort that could arguably be used on something else.

If you're going to encourage use of HMAC, people will use it with http endpoints

This doesn't follow logically, so your whole point here is moot.

How is this hard to follow? You're presenting an option that allows secure authentication over http. People will do that. So you should include replay attack mitigations.

I doubt anybody would suggest legacy hashes like sha1 or md5 in a brand new hmac flow

I don't doubt they would, as MD5's weaknesses are mostly collision/tunnelling attacks, and if it's in the middle of an HMAC, that risk is mitigated. Also, MD5 is faster, and this scheme is implemented much more frequently (from researching this before sending this RFC), including when you only want to validate against content corruption in transit (see e.g. Google Cloud Storage's MD5 hashes). All in all, though, I think it's better to aim at "easy to implement" and do the simpler scheme suggested in this thread.

It's true that HMAC-MD5 has no known attacks, but every company you've referenced is using the SHA family in their HMAC. Nobody designs their signature scheme by choosing the option most likely to be broken. MD5 being used at Google for in-transit integrity checks is irrelevant here.

njaremko commented 4 years ago

It also seems this could be broken into two separate proposals:

  1. Do HMAC authentication at all. Something like using the existing Hasura convention of having a HASURA_SIGNATURE_KEY env var defined causes Hasura to do an HMAC flow on actions, and probably elsewhere too.
  2. Your proposed key generation and rotation stuff.
haf commented 4 years ago

Why do you want this separate list?

So that items in this separate list are not sent with the request, but are semantically understood by the Hasura engine.

Ah, so you want to derive this key in hasura, and then share that with your action as the signing key?

Yes, see sample; Hasura is sender.

you're still deriving and having to configure this additional key in the action handler. You still have two keys.

You only need the master key saved; the remainder can be generated with the Hasura CLI, for example. It's about reducing cognitive and ops load.

A benefit of this approach is that you don't have to configure the action at all in Hasura metadata: you can move your action handler wherever you like, and as long as Hasura's endpoint is known, it can configure itself on startup if it's missing.

It would seem you're suggesting administering Hasura remotely from the Action Handlers? Right?

Putting TLS interception on your production backend servers is beyond dumb, but even in that case it's still only the intercepting box that sees anything, and that too is owned by you. Nothing is transmitted in plaintext.

It's not beyond dumb. It's defense in depth, as you don't know if a prod server gets hacked and starts exfiltrating data. Even a Docker container could get hacked. Istio does TLS interception throughout, and no one is calling that dumb. Furthermore, it's not only about transmission of plaintext and interception thereof; it's also about verification of the sender's identity.

All of that is effort. Effort that could arguably be used on something else.

I argue the effort should be spent, because it's not much effort.

How is this hard to follow? You're presenting an option that allows secure authentication over http. People will do that. So you should include replay attack mitigations.

It's hard to follow because it's not logical. Using TLS and verifying senders are not mutually exclusive: that attack is outside the scope of this RFC. If you want to amend my RFC, please do so by specifying how the state of those timestamps should be handled, whether it should fail closed or fail open, and why a timestamp that can repeat itself (clocks can go backward, or not advance at all) is a better option than a high-entropy request id.
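
For what it's worth, one possible shape of the receiver-side state in question (purely illustrative, not part of the RFC) is a freshness window plus a bounded cache of seen request ids, failing closed:

```ts
// Illustrative replay protection: accept a request only if its timestamp is
// within a freshness window and its request id has not been seen within that
// window. Anything doubtful is rejected (fail closed).
const WINDOW_MS = 5 * 60 * 1000;
const seen = new Map<string, number>(); // requestId -> timestamp (ms)

function acceptOnce(requestId: string, timestampMs: number, nowMs = Date.now()): boolean {
  // Evict ids that have aged out of the window so the cache stays bounded.
  for (const [id, ts] of seen) {
    if (nowMs - ts > WINDOW_MS) seen.delete(id);
  }
  if (Math.abs(nowMs - timestampMs) > WINDOW_MS) return false; // stale or future-dated
  if (seen.has(requestId)) return false; // replay
  seen.set(requestId, timestampMs);
  return true;
}

// First use of a fresh id succeeds; a replay of the same id is rejected.
console.log(acceptOnce("req-123", Date.now())); // true
console.log(acceptOnce("req-123", Date.now())); // false
```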

It also seems this could be broken into two separate proposals:

Yes, but I don't think it should, not right now. I see this as one feature, with two tasks corresponding to your two points.

Anyone that deals with large payloads ends up switching to streaming parsers.

Yes, and now we're talking 50MiB+ payloads in my experience. It's not that common to have payloads that large, and if you do, there could be an option to opt out of HMAC on a per-action basis. Look, for example, at Azure Service Bus: it has a limit of 256 KiB per message, and if you pay more, you get to send 1 MiB messages.

Do HMAC authentication at all. Something like using the existing Hasura convention of having a HASURA_SIGNATURE_KEY env var defined causes Hasura to do an HMAC flow on actions, and probably elsewhere too.

No, this is not what I'm suggesting. Let's avoid over-complicating the life of ops with lots of keys; if we define an admin key, you should get the HMAC header automatically.

[using the admin key] It's only a huge security risk if:

I beg to differ; it's the keys to the kingdom of your database. Developer demos is one thing; production is another. This RFC defines a way to segregate keys by action name. Perhaps if everyone was using per-service databases, you'd be right, but IRL relational databases grow to be huge.

njaremko commented 4 years ago

Your answer is shifting quite a bit above me while I write this, but I'll try to keep my answers up to date...

You only need the master key saved; the remainder can be generated with the Hasura CLI, for example. It's about reducing cognitive and ops load.

You haven't described how you intend to get this generated secret onto the action handler. If you're manually sharing it, then there's not much benefit over a randomly generated key. Are you suggesting that the hasura CLI can be run on the action handler box to fetch the secret?

It would seem you're suggesting administering Hasura remotely from the Action Handlers? Right?

Yes, though note that I'm not suggesting this, as I don't do it. I was merely trying to give that proposed solution a fair shake, as you were not describing it accurately in your initial post.

Putting TLS interception on your production backend servers is beyond dumb, but even in that case it's still only the intercepting box that sees anything, and that too is owned by you. Nothing is transmitted in plaintext.

It's not beyond dumb. It's defense in depth, as you don't know if a prod server gets hacked and starts exfiltrating data. Even a Docker container could get hacked. Istio does TLS interception throughout, and no one is calling that dumb. Furthermore, it's not only about transmission of plaintext and interception thereof; it's also about verification of the sender's identity.

All of that is effort. Effort that could arguably be used on something else.

I argue the effort should be spent, because it's not much effort.

Implementing the library probably wouldn't be. But documenting it well, providing support, and maintaining it going forward (forever) are all non-trivial.

How is this hard to follow? You're presenting an option that allows secure authentication over http. People will do that. So you should include replay attack mitigations.

It's hard to follow because it's not logical. Using TLS and verifying senders are not mutually exclusive: that attack is outside the scope of this RFC. If you want to amend my RFC, please do so by specifying how the state of those timestamps should be handled, whether it should fail closed or fail open, and why a timestamp that can repeat itself (clocks can go backward, or not advance at all) is a better option than a high-entropy request id.

It also seems this could be broken into two separate proposals:

Yes, but I don't think it should, not right now. I see this as one feature, with two tasks corresponding to your two points.

Except it's not one feature. You're introducing two separate features and pretending they should be bundled together.

Yes, and now we're talking 50MiB+ payloads in my experience. It's not that common to have payloads that large, and if you do, there could be an option to opt out of HMAC on a per-action basis. Look, for example, at Azure Service Bus: it has a limit of 256 KiB per message, and if you pay more, you get to send 1 MiB messages.

You've ignored other people's anecdotes, so we'll ignore yours here as well. There exist people that will do this frequently. Introducing "opt-out" functionality would be another thing that needs to be managed. Your Azure example is irrelevant, so not sure why you included it.

No, this is not what I'm suggesting. Let's avoid over-complicating the life of ops with lots of keys; if we define an admin key, you should get the HMAC header automatically.

It's not a lot of keys. It's one key. Your proposal is to do a bunch of key derivation that some DevOps person is going to have to understand and configure. Having one global signing key that needs to be propagated everywhere is significantly easier to understand and manage.

I beg to differ; it's the keys to the kingdom of your database. Developer demos is one thing; production is another. This RFC defines a way to segregate keys by action name. Perhaps if everyone was using per-service databases, you'd be right, but IRL relational databases grow to be huge.

This wasn't developer demos, it was suggestions for how to do production services. Your derived key proposal doesn't solve this problem. You still need the admin key if you want to do non-requesting-user scoped stuff. A separate, per action, scoped permission admin key could be proposed, but that's not what we're talking about here.

njaremko commented 4 years ago

Aside: Tanmai is giving a 3 hour "Enterprise-grade GraphQL Authorization" talk tomorrow (9am PST - 12) for Hasura Con, so maybe he'll blow our minds and present something awesome.

haf commented 4 years ago

You haven't described how you intend to get this generated secret onto the action handler. If you're manually sharing it, then there's not much benefit over a randomly generated key. Are you suggesting that the hasura CLI can be run on the action handler box to fetch the secret?

I'm just saying there's a little benefit to doing it this way, compared to randomly generating it. Doing it this way makes it possible to avoid saving action keys anywhere; if you have the admin key, you can always regenerate the action keys. This doesn't hold for randomly generated keys, as they would have to be configured in Hasura's metadata and managed in 1Password or similar, separate from the admin key, for ops.

It would seem you're suggesting administering Hasura remotely from the Action Handlers? Right?

Yes, though note that I'm not suggesting this, as I don't do it. I was merely trying to give that proposed solution a fair shake, as you were not describing it accurately in your initial post.

I would be against "administering up" by principle; lower privileged services (as denoted by them having per-action-handler-keys) should not be able to administer higher privileged services like Hasura.

You're presenting an authentication method that allows people to securely verify the sender without TLS...so people will do that at some point.

Maybe, maybe not. This feature doesn't push people to use plaintext IMO.

Except it's not one feature. You're introducing two separate features and pretending they should be bundled together.

I'm not pretending, I'm saying that I think they should. :D

I'm not saying you have to use timestamps, I'm saying you have to account for replay attacks in your signature. A nonce is fine too. Just let people keep track of repeated requests if they want to.

I don't "have to" do this. I could potentially do it, but the questions I outlined remains; how do you manage that state on the receiving side. You mention the maintenance burden; having this to maintain as well is a drastic change in the amount of maintenance needed.

Having one global signing key that needs to be propagated everywhere is significantly easier to understand and manage.

Easier to understand, maybe; easier to do key rotation on, not so much, when you have ten different deployment units, each with an action handler to reconfigure whenever you rotate the key. Then it's harder to manage.

"Enterprise-grade GraphQL Authorization"

This is not authorization.


I think we need to agree to disagree here. I'm not getting much out of refuting your every suggestion and you're not providing any constructive critique to my suggestion either.

haf commented 4 years ago

BTW; @njaremko I'm happy to include a request id (UUID v4) in the RFC, as a separate header and as part of the MAC if you would like me to?

njaremko commented 4 years ago

BTW; @njaremko I'm happy to include a request id (UUID v4) in the RFC, as a separate header and as part of the MAC if you would like me to?

That's all I ask :)

I would be against "administering up" by principle; lower privileged services (as denoted by them having per-action-handler-keys) should not be able to administer higher privileged services like Hasura.

Yep, I'm not really for it either, but figured it should be fairly described.

I'm not pretending, I'm saying that I think they should. :D

"Pretending" was poor word choice on my part, I apologize. Agree to disagree on this one.

This is not authorization.

I'm aware, but who knows what they'll show, they've been hinting at some secret stuff they wanted to announce in the community calls.

I think we need to agree to disagree here. I'm not getting much out of refuting your every suggestion and you're not providing any constructive critique to my suggestion either.

Oof, I was just starting to like you too...you haven't refuted anything above, and I have provided constructive critiques. Not sure why you like talking down to people so much.

haf commented 4 years ago

Oof, I was just starting to like you too...you haven't refuted anything above, and I have provided constructive critiques. Not sure why you like talking down to people so much.

I think you like to talk down to people, a little bit, yourself. I just don't feel that the criticism is constructive; it's aimed at finding flaws in my reasoning and stated in a manner that assumes the "found flaw" is true, not at trying to improve my change suggestion (except the replay case, and I have added that to the RFC now). Like, right now, you suggest something like the action handler administering the Hasura service, when you, yourself, think it's a bad idea.

njaremko commented 4 years ago

I think you like to talk down to people, a little bit, yourself.

I'll admit that I said something mean in the previous github issue, but this issue has been quite civil on my end. You on the other end have been saying lots of fun stuff like:

It's like a case study on appeals to authority.

I just don't feel that the criticism is constructive; it's aimed at finding flaws in my reasoning and stated in a manner that assumes the "found flaw" is true, not at trying to improve my change suggestion (except the replay case, and I have added that to the RFC now).

The funny thing is, I'm not even against HMAC being added to Hasura. I think it's a totally fine idea. I'm less excited about your key derivation stuff, but that's fine if other people find it valuable. I haven't been trying to find flaws in your reasoning, I've been trying to correct misleading/false things you've stated as true.

Like, right now, you suggest something like the action handler administering the Hasura service, when you, yourself, think it's a bad idea.

I did not suggest it, I presented it fairly, since you introduced it unfairly.


I'm going to bow out, I've said my piece, and any further discussion is probably not going to be productive. I trust the people at Hasura to do the right thing here. Best of luck with this issue.

haf commented 4 years ago

I didn’t say they were dumb, I said inexperienced: “enough for 99%”, and I stand by that comment. Same for the authority arguments, which are valid, since I know what I’m talking about. You, on the other hand, haven’t given any credentials nor supported your reasoning with research or samples.

Perhaps next time you should focus less on trying to correct me and more on trying to correct the change request, and it will go better for you.

Good luck in the future, to you as well.

On Wednesday, Jun 17, 2020 at 11:37 PM, Nathan Jaremko <notifications@github.com> wrote:

I think you like to talk down to people, a little bit, yourself.

I'll admit that I said something mean in the previous github issue, but this issue has been quite civil on my end. You on the other end have been doing lots of:

(paraphrased, since you've removed it now) "still-in-uni" people are dumb, this isn't hard, see I did it
"I think that kind of talk generally sabotages software security and is the kind of talk that junior devs/ops do"
"I've given lectures on this"
etc

It's like a case study on appeals to authority.

I just don't feel that the criticism is constructive; it's aimed at finding flaws in my reasoning and stated in a manner that assumes the "found flaw" is true, not at trying to improve my change suggestion (except the replay case, and I have added that to the RFC now).

The funny thing is, I'm not even against HMAC being added to Hasura. I think it's a totally fine idea. I'm less excited about your key derivation stuff, but that's fine if other people find it valuable. I haven't been trying to find flaws in your reasoning, I've been trying to correct misleading/false things you've stated as true.

Like, right now, you suggest something like the action handler administering the Hasura service, when you, yourself, think it's a bad idea.

I did not suggest it, I presented it fairly, since you introduced it unfairly.

I'm going to bow out, I've said my piece, and any further discussion is probably not going to be productive. I trust the people at Hasura to do the right thing here. Best of luck with this issue.
