OWASP / ASVS

Application Security Verification Standard
Creative Commons Attribution Share Alike 4.0 International

Request to add Item for checking JSON Interoperability related vulnerabilities to ASVS #1538

Closed: ImanSharaf closed this 9 months ago

ImanSharaf commented 1 year ago

I came across a research article that highlights the security risks associated with JSON interoperability and shows how JSON parsing inconsistencies can mask serious business logic vulnerabilities. The article explains how discrepancies across JSON parsers combined with multi-stage request processing can introduce serious vulnerabilities. Here are four of them:

Given the importance of JSON in web application communications and the security risks it poses, I believe it is crucial to add a new item to the ASVS to address these security risks. This would ensure that web applications that follow the ASVS guidelines are secure against JSON interoperability vulnerabilities.

tghosth commented 1 year ago

Agree this is an interesting question.

I need to review the attached research paper in the context of the items above.

Note that Jim suggested the following replacement.

5.3.6 [MODIFIED] Verify that the application protects against JSON injection attacks. (C4) 75

with

5.3.6 [MODIFIED] Verify that the application protects against JSON injection attacks by sanitizing JSON before JSON parsing. (C4) 75

Note that Elar also noted that a sanitization requirement should be moved to 5.2.

ImanSharaf commented 1 year ago

I believe that 5.3.6 doesn't fully cover JSON interoperability vulnerabilities, what do you think about this one:

tghosth commented 1 year ago

This feels really hard to actually verify.

Again I would prefer to avoid a list of attacks but rather focus on the defense so maybe:

Verify that any different JSON parsers used in the application perform parsing in a consistent way to avoid JSON interoperability vulnerabilities.

@elarlang @ImanSharaf @jmanico do you approve of this wording?

ImanSharaf commented 1 year ago

Verify that any different JSON parsers used in the application perform parsing in a consistent way to avoid JSON interoperability vulnerabilities.

This appears to be satisfactory. @tghosth

elarlang commented 1 year ago

I just wonder how many people will understand what it means and why this requirement exists.

jmanico commented 1 year ago

I don't think there is anything wrong with using different JSON parsers in an app as long as they are updated. I actually... I actually..... agree... with Elar.

tghosth commented 1 year ago

@elarlang we could add the suggested article as a footnote https://bishopfox.com/blog/json-interoperability-vulnerabilities

@jmanico the problem isn't using different parsers; the problem is if they have different parsing behaviours which lead to vulnerabilities.

My question here would be: do we want this to be more general than just JSON? I guess conceptually it could be a problem with any inconsistent parsing behaviour.

elarlang commented 1 year ago

A link to materials makes sense - if someone reads the document.

Ok, so one parser gives one output and another parser gives another output.

But you don't use 2 parsers for the same message for decoding, do you? After decoding the JSON, you need to validate all items. Technically it's an interesting toy to play with, but maybe I haven't connected all the dots to understand the risk in everyday practice. Anyway, if someone can prove it to be an issue, that's the way to go.

jmanico commented 1 year ago

Are you telling me that all JSON parsers are insecure? Let's be clear: having more parsers doesn't inherently make your system less secure. The real issue lies in using outdated or known insecure processors, and that's something we address separately.

Security isn't a one-size-fits-all game. It's about understanding the nuances and making informed decisions. So, let's not throw the baby out with the bathwater here. Instead, let's focus on keeping our parsers up-to-date and well-configured, and let's pay attention to known vulnerabilities. That's how you build a robust security posture, not by making sweeping generalizations.

tghosth commented 1 year ago

Hi @jmanico, I think there is a misunderstanding here. We are not talking about vulnerable JSON parsers; we are talking about a situation where security assumptions are made based on the behaviour of one parser, but in fact a different parser behaves differently. Not necessarily insecurely, just differently.

For example, look at this example from the blog post linked above:

[Image: example from the Bishop Fox blog post]
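Since the embedded image is not available here, one class of discrepancy the blog post describes - number handling - can be shown directly. This is a minimal illustrative sketch using Python's built-in json module, not the exact example from the post:

```python
import json
import math

# A value too large for an IEEE-754 double: some parsers reject it,
# some preserve it exactly, and Python's json module silently turns
# it into infinity - three parsers, three different "truths".
doc = '{"qty": 1e309}'
qty = json.loads(doc)["qty"]

print(qty)              # inf
print(math.isinf(qty))  # True
```

A downstream service written in another language might instead see a parse error or a huge exact value, so the two services disagree about what the same document says.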

elarlang commented 1 year ago

@tghosth , @ImanSharaf

But you don't use 2 parsers for the same message for decoding, do you? After decoding the JSON, you need to validate all items. Technically it's an interesting toy to play with, but maybe I haven't connected all the dots to understand the risk in everyday practice.

What are the pre-conditions for this vulnerability to result in a successful attack?

tghosth commented 1 year ago

@elarlang did you read through this link: https://bishopfox.com/blog/json-interoperability-vulnerabilities

It gives quite a few different examples.

elarlang commented 1 year ago

I read it "diagonally" (skimmed it). My point is, we should provide a "TL;DR" and conclusion here - what attack scenario are we trying to take down with this potential new requirement?

tghosth commented 1 year ago

OK, we have two micro-services. The first one enforces a security control based on the "user_id" key in a JSON object; let's say it checks whether the "user_id" field matches the logged-in user, because we are going to do something sensitive for that user. It parses the JSON object, which contains two "user_id" keys. One has the value "fred" and the other has the value "fake", and this particular parser's behaviour is to concatenate the duplicate values. We are logged in as "fredfake", so this successfully matches and the check succeeds.

The JSON then gets passed to a different microservice. This one assumes that the check has already been done, so it can just process the JSON. It parses the JSON, but this particular parser's behaviour is to read the first value and ignore the duplicate, so it reads "fred" as the "user_id" value and performs a sensitive operation on "fred", even though "fred" is not the logged-in user.

The overall concept is to avoid security assumptions being made based on parser behaviour @elarlang
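The two parser behaviours in this scenario can be approximated in a single runtime. A minimal sketch, assuming Python, whose json module keeps the last duplicate key by default; the `object_pairs_hook` here emulates a first-key-wins parser (Python's json has no concatenating mode, so that part of the scenario is not reproduced):

```python
import json

payload = '{"user_id": "fred", "user_id": "fake"}'

# Parser A behaviour: Python's json keeps the LAST duplicate key.
service_a_view = json.loads(payload)

# Parser B behaviour (emulated): keep the FIRST duplicate key,
# as some other parsers do. Reversing the pairs before building
# the dict makes the first occurrence win.
service_b_view = json.loads(
    payload, object_pairs_hook=lambda pairs: dict(reversed(pairs))
)

print(service_a_view["user_id"])  # fake
print(service_b_view["user_id"])  # fred
```

The point is exactly the one made above: the security check and the sensitive operation each see a different "user_id" from the same bytes.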

maizuka commented 1 year ago

The architecture requirement 1.1.6 says that security controls should be centralized.

Verify implementation of centralized, simple (economy of design), vetted, secure, and reusable security controls to avoid duplicate, missing, ineffective, or insecure controls.

So is that what is recommended in these cases?

tghosth commented 1 year ago

In principle it is either about ensuring the multiple parsers have the same behaviour, or that the check and the use happen in the same parser, I think.

jmanico commented 1 year ago

ChatGPT help:

Using two different JSON parsers in the same application can introduce a variety of security risks, particularly around inconsistency and complexity. Here's a more technical breakdown:

Increased Attack Surface: Two parsers mean two sets of vulnerabilities. If an attacker can't exploit one, they might be able to exploit the other.

Inconsistent Behavior: If the two parsers handle edge cases differently, it can create security vulnerabilities. For instance, if one parser is more lenient in accepting malformed JSON, it could be exploited for injection attacks.

Inconsistent Validation: It's possible for one parser to have a different validation mechanism than another. For example, one might automatically decode encoded entities, making it possible to bypass security controls like input validation filters.

Increased Complexity: More components generally mean more complexity, and complexity is the enemy of security. The more complex a system is, the harder it is to secure.

Configuration Challenges: Each parser might require its own set of configurations for secure operation. Managing these configurations, and ensuring they are securely implemented, becomes challenging.

Version Management: Keeping both parsers up to date is twice as much work. An outdated parser can have known vulnerabilities that are exploitable.

Data Flow: If JSON data is passed between the two parsers for some reason (e.g., one parser is used for an initial quick validation, and another is used for a more complex operation), you have to ensure that the security controls are consistent across the data flow, which is hard to manage.

Disparate Logging: Two parsers may log differently, making it difficult to track malicious activity that spans both.

Error Handling: Inconsistent error messages between the two parsers could leak sensitive information or confuse administrators/developers.

Recommendations: If you have to use two JSON parsers, consider the following:

Least Privilege: Configure both to operate on the principle of least privilege. They should only have the minimum necessary permissions to function.

Consistent Configuration: Ensure that both parsers are configured as similarly as possible to reduce the chances of inconsistent behavior.

Up-to-Date: Keep both parsers updated to their latest secure versions.

Security Auditing: Continuously audit and monitor both parsers for vulnerabilities and unexpected behavior.

Uniform Logging: Implement a unified logging mechanism that consolidates logs from both parsers in a consistent manner for easier analysis.

Thorough Testing: Both unit tests and security tests like fuzzing should be conducted to ensure that the parsers behave consistently and securely.

In summary, the main risk boils down to the increased complexity and inconsistency that come with using two different JSON parsers. You're not just doubling your work, but potentially squaring it, as you have to consider interactions between the two parsers as well. The "rule of least complexity" is usually a good principle to follow in security, and this situation is no exception.

tghosth commented 1 year ago

@elarlang

So maybe I would update the wording to:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid vulnerabilities caused by different parsers interpreting edge cases differently.

Do you think this makes it any easier to understand the reason here?

elarlang commented 1 year ago

Yes, it is (at least for me) more understandable, and maybe even mention those "features" there as an example (JSON Interoperability).

tghosth commented 1 year ago

@elarlang do you mean like:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid vulnerabilities caused by different parsers interpreting edge cases differently. For example, how duplicate keys are processed.

elarlang commented 1 year ago

By proposing "JSON Interoperability", my intent was to provide a "Google this keyword" solution for understanding the requirement.

From the proposed requirement text, maybe we don't need "edge cases" - if the parsers simply work differently, it's already problematic.

And now comes the next set of questions:

tghosth commented 1 year ago

Ah ok, I see what you mean.

So I would go with:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid JSON Interoperability vulnerabilities caused by different parsers interpreting edge cases differently.

For category, V13.1, as this is about parsers and in fact is similar to 13.1.1.

For CWE, according to this it should potentially be CWE-436.

Level, I would say 1 for now.

elarlang commented 1 year ago

From the proposed requirement text, maybe we don't need "edge cases" - if the parsers simply work differently, it's already problematic.

This "edge cases" part is actually becoming more of an undefined thing.

Level - I was thinking more about between Level 2 and Level 3, clearly not Level 1. Level 1 is not justified for me, based on either testability or risk.

ok with CWE and category.

tghosth commented 1 year ago

So like this?

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid JSON Interoperability vulnerabilities.

I could live with L2 since it is a little niche.

elarlang commented 1 year ago

yes

tghosth commented 1 year ago

Sorry to drive you crazy, @elarlang @ImanSharaf

Do you think we should specify JSON parsers or make it more general like

Verify that different parsers used in the application for the same data type (e.g. JSON parsers, XML parsers) perform parsing in a consistent way to avoid issues such as JSON Interoperability vulnerabilities.

elarlang commented 1 year ago

this abstraction makes sense

jmanico commented 1 year ago

As long as you keep your parsers up to date (which we already cover) this is not a security issue.

You’re all chasing a functionality problem, not a security problem.

elarlang commented 1 year ago

As long as you keep your parsers up to date (which we already cover) this is not a security issue. You’re all chasing a functionality problem, not a security problem.

@jmanico - how is this aligned with your previous comment? https://github.com/OWASP/ASVS/issues/1538#issuecomment-1753077142

Even if it is "just a functionality problem" - incorrectly working functionality is a cause of integrity problems (if one interpreter "translates" a value one way and another a different way).

maizuka commented 1 year ago

I am asking questions for my understanding:

1) How easy is it to guarantee that different parsers have the same behavior? The behavior of each parser is feature dependent, so parsers should not be held responsible for behavior that is not defined in standards such as the RFCs.

2) Why would you need to allow multiple parsers for user JSON? In general, it's hard to imagine a situation where different parsers repeatedly decode the same object. There may be cases where multiple parses are performed in parallel for performance reasons, but this can be avoided by putting one parser at the front.

elarlang commented 1 year ago

I agree with @maizuka concerns and also previous comment (https://github.com/OWASP/ASVS/issues/1538#issuecomment-1749865787).

I can also understand technically how the problem may occur, but I find it such an edge case that I wouldn't mind keeping this requirement out of the ASVS. If it goes in, I'm OK with the last proposed wording.

tghosth commented 1 year ago
  2. Why would you need to allow multiple parsers for user JSON? In general, it's hard to imagine a situation where different parsers repeatedly decode the same object. There may be cases where multiple parses are performed in parallel for performance reasons, but this can be avoided by putting one parser at the front.

The easiest example I can think of is if you have different microservices in different languages which both need to parse JSON.

  1. How easy is it to guarantee that different parsers have the same behavior? The behavior of each parser is feature dependent, so parsers should not be held responsible for behavior that is not defined in standards such as the RFCs.

So this is a good question: how is that actually actionable?

I wonder if we should refocus this to require that only a single parser is used, on the assumption that being sure about parser behaviour in edge cases is problematic.

What do you think @maizuka @elarlang ?

elarlang commented 1 year ago

The easiest example I can think of is if you have different microservices in different languages which both need to parse JSON.

And in this case the client communicates directly with both microservices? Not like sending the JSON to one service, which then communicates with the other service? And both services do input validation? Then we are already going against single vetted solutions (1.1.6).

So, yes, technically it's possible to do it that way, but... I'm not sure I could call it a nice architecture choice. Maybe the BFF (Backend for Frontend) idea suits better here.

maizuka commented 1 year ago

The first thing that comes to mind from your example is when two sections in a page retrieve data from different Web APIs. The second is when two consecutive pages use different parsers.

The former can lead to inconsistent views within the page, similar to one of the HTTP parameter pollution issues. I can't think of any workaround other than using just one parser.

In the latter case, the problem is when different parsers receive a single user input in succession, so one parser must use the other parser's processing results.
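The latter pattern - one parser having to use the other parser's processing results - amounts to parsing once at the boundary and re-serializing, so downstream services never see the raw bytes. A minimal sketch, assuming Python and a hypothetical `canonicalize` helper at the front service:

```python
import json

def canonicalize(raw: bytes) -> str:
    """Parse once at the boundary and re-serialize, so every
    downstream service receives the same normalized document."""
    obj = json.loads(raw)  # ambiguity (e.g. duplicate keys) is resolved HERE, once
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))

raw = b'{"user_id": "fred", "user_id": "fake"}'
print(canonicalize(raw))  # {"user_id":"fake"}
```

Whatever this front parser decides (here, last duplicate key wins) becomes the only interpretation the rest of the system ever sees, which is the single-vetted-parser idea in code form.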

tghosth commented 1 year ago

Ok so, my question was:

I wonder if we should refocus this to require that only a single parser is used, on the assumption that being sure about parser behaviour in edge cases is problematic.

maizuka commented 1 year ago

I think so at the moment. I found little material discussing how to mitigate the risk.

tghosth commented 9 months ago

Following this, https://github.com/OWASP/ASVS/issues/1538#issuecomment-1775118281

I think this requirement still makes sense, and actually 13.1.1 should be merged into it.

I think L2 makes more sense.

Opened PR #1841