Should you validate with a schema before attempting to canonize?
Yes. And this should really be general practice with any input, so as to avoid incurring the extra costs of cryptography, key resolution, etc. Determine whether the data is worth processing (it is sufficiently small, can be parsed, even looks right for the application, etc.) before processing it.
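A minimal sketch of what such pre-checks might look like in a Node.js service, assuming the Ajv JSON Schema library; the size limit and the schema shape are illustrative assumptions, not requirements from any specification:

```ts
// Illustrative cheap checks before any expensive processing; MAX_BYTES and the
// schema shape are assumptions for this sketch, not part of the spec.
import Ajv from "ajv";

const MAX_BYTES = 64 * 1024;
const ajv = new Ajv();

const validate = ajv.compile({
  type: "object",
  required: ["@context", "type", "issuer", "proof"],
  properties: {
    "@context": { type: "array", minItems: 1 },
    type: { type: "array", items: { type: "string" } },
    issuer: { type: ["string", "object"] },
    proof: { type: "object" },
  },
});

function preflight(raw: string): unknown {
  // Is it sufficiently small?
  if (Buffer.byteLength(raw, "utf8") > MAX_BYTES) {
    throw new Error("input too large");
  }
  // Can it be parsed?
  const doc: unknown = JSON.parse(raw);
  // Does it even look right for the application?
  if (!validate(doc)) {
    throw new Error("input does not look like a credential for this application");
  }
  // Only now hand the document to canonicalization / signature verification.
  return doc;
}
```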
Should you set a depth on graph processing?
Yes, or some other constraint (wall time?) as to be determined or advised by the RCH WG.
> Should you validate with a schema before attempting to canonize?
>
> Yes. And this should really be general practice with any input, so as to avoid incurring the extra costs of cryptography, key resolution, etc.
Strong disagree. By processing it with something as complex as schema validation before cryptographically validating it, you are dramatically increasing the surface area for attack vectors from parties that are not the signer. The schema validation and any other logic would have to be as tightly audited as the crypto library.
For example, JOSE/JWT libraries are always included in the audits for the small bit of decoding they do in any security-critical deployment (speaking from direct involvement/experience).
> By processing it with something as complex as schema validation before cryptographically validating it, you are dramatically increasing the surface area for attack vectors from parties that are not the signer.
I agree in general, but note that some additional layering can be acceptable, since cryptographic operations are also expensive.
Typical scenario implementing what I view as best practices:
untrusted user input -> client request -> server validation (schema) -> server verify (crypto) -> server validation (of verified payload).
In the case of JWT, the first validation can be skipped, but size limits on the string input are obviously still enforced.
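A minimal sketch of that ordering on the server side; every function here is a hypothetical placeholder for application-specific logic, not a real library API:

```ts
// Hypothetical placeholders for the three server-side steps in the pipeline above.
type Verified = { ok: boolean; document: Record<string, unknown> };

declare function requestSchemaValidator(raw: string): Record<string, unknown>; // step 1
declare function verifyProof(doc: Record<string, unknown>): Promise<Verified>; // step 2
declare function checkBusinessRules(doc: Record<string, unknown>): void;       // step 3

async function handleRequest(raw: string): Promise<Record<string, unknown>> {
  // 1. Server validation (schema): bound the size, parse, check the basic shape.
  const payload = requestSchemaValidator(raw);

  // 2. Server verify (crypto): proof / signature checks on the parsed document.
  const { ok, document } = await verifyProof(payload);
  if (!ok) throw new Error("proof verification failed");

  // 3. Server validation of the *verified* payload: business rules that only
  //    make sense once you know who produced the data.
  checkBusinessRules(document);
  return document;
}
```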
@quartzjer,
> The schema validation and any other logic would have to be as tightly audited as the crypto library.
Your schema validation should be tightly audited regardless. A stolen key, or another vulnerability affecting an external signer outside of your control, could result in the exploitation of your schema validation software.
Additionally, checking signatures after schema validation may not be easily accomplished in many generalized multi-service / decentralized systems architectures. In some open world systems, doing the crypto checks first may also never add the protections you suggest, because there is no allow list of trusted signers at that same layer or component. Again, these same architectures may incur greater costs when asked to perform tasks unnecessarily on invalid data. This itself is an attack. In short, I don't consider it realistic or good security practice in many modern architectures to rely on running crypto and key resolution checks first.
@OR13,
> Typical scenario implementing what I view as best practices:
>
> untrusted user input -> client request -> server validation (schema) -> server verify (crypto) -> server validation (of verified payload).
Agreed.
For reference:
https://cheatsheetseries.owasp.org/cheatsheets/Input_Validation_Cheat_Sheet.html
> Input validation should be applied on both syntactical and semantic level.
>
> It is always recommended to prevent attacks as early as possible in the processing of the user's (attacker's) request. Input validation can be used to detect unauthorized input before it is processed by the application.
Consider that "JSON Schema" processing can introduce attack surface:
> By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
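A minimal sketch of the kind of problem the quote describes, assuming the Ajv JSON Schema library; the regex A(B|C+)+D is a well-known catastrophic-backtracking pattern and is used here purely for illustration:

```ts
// Illustrative only: a JSON Schema "pattern" keyword that triggers
// catastrophic backtracking in a backtracking regex engine.
import Ajv from "ajv";

const ajv = new Ajv();
const validate = ajv.compile({
  type: "string",
  pattern: "A(B|C+)+D", // a well-known evil-regex shape
});

// The trailing "X" guarantees no match, so the engine explores an exponential
// number of ways to split the run of C's before it can give up.
const hostile = "A" + "C".repeat(28) + "X";
console.time("redos");
validate(hostile); // can take on the order of seconds or longer
console.timeEnd("redos");
```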
In my opinion, validation before "verification" should be extremely limited.
We should point to the following section on poisoned graphs: https://www.w3.org/TR/rdf-canon/#dataset-poisoning
The RCH WG is going to include, in the test suite, a poison graph example that requires the processor to abort, to ensure that implementers running against the test suite implement a poison graph aborting mechanism.
We might consider saying that any use of RDF Dataset Canonicalization with Data Integrity MUST ensure that software aborts when poison graphs are detected (either via recursion depth or processing exceeding a certain timeout value).
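A minimal sketch of a wall-clock guard, where `canonize` is a hypothetical stand-in for whatever RDF Dataset Canonicalization implementation is in use (not a specific library's API); note that a purely synchronous, CPU-bound implementation would need to check the deadline internally or run in a worker for a race like this to take effect:

```ts
// `canonize` is a hypothetical stand-in, not a specific library's API.
declare function canonize(dataset: string): Promise<string>;

async function canonizeOrAbort(dataset: string, timeoutMs = 1000): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error("canonicalization aborted: possible poison graph")),
      timeoutMs
    );
  });
  try {
    // Whichever settles first wins; a poisoned dataset trips the deadline.
    return await Promise.race([canonize(dataset), deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```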
@dlongley @gkellogg, thoughts on what we should say in the Data Integrity specification on this topic?
I suggest including examples in the test suites, and defining how verifiers should handle this class of error vs other errors.
> The RCH WG is going to include, in the test suite, a poison graph example that requires the processor to abort, to ensure that implementers running against the test suite implement a poison graph aborting mechanism.
We haven't decided to actually add an example to the spec, although there is a test. But it's basically a highly connected clique of 10 nodes, all relating directly to each other.
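A rough sketch of a dataset in that spirit, with an arbitrary predicate; this is not the actual RCH WG test fixture:

```ts
// Every one of `n` blank nodes related to every other one, which makes them
// structurally indistinguishable to a naive canonicalization algorithm.
function cliqueNQuads(n = 10): string {
  const lines: string[] = [];
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      if (i !== j) {
        lines.push(`_:b${i} <http://example.org/rel> _:b${j} .`);
      }
    }
  }
  return lines.join("\n");
}

console.log(cliqueNQuads()); // 90 statements; canonical relabeling can blow up combinatorially
```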
> We might consider saying that any use of RDF Dataset Canonicalization with Data Integrity MUST ensure that software aborts when poison graphs are detected (either via recursion depth or processing exceeding a certain timeout value).
There is an issue to add wording to RDF Canon (https://github.com/w3c/rdf-canon/issues/120), which I'll act on this week. Basically, it will add normative language to the spec about guarding against unbounded computation, so highlighting that existing requirement should be sufficient.
PR #128 has been raised to address this issue. This issue will be closed once PR #128 is merged.
PR #128 has been merged, closing.
Originally: https://github.com/w3c/vc-jwt/pull/44#issuecomment-1421382603
Should you validate with a schema before attempting to canonize?
Should you set a depth on graph processing?