Closed santhosh-tekuri closed 1 year ago
sanity checks failed with `RecursionError: maximum recursion depth exceeded in comparison`
The spec goes on to say "the behavior is undefined," which means we can't test anything. The tests check that implementation behavior matches what the spec prescribes. Since the spec is non-specific there's nothing to test.
My implementation actually errors rather than providing a validation result, and the test suite doesn't currently have a way to check for runtime errors.
We can try to come up with schemas that look like they might lead to infinite loops but actually don't, but we can't test the erroneous behaviour itself, because it's undefined (some implementations might produce an error, others might throw whatever exception makes sense for their language).
Here are two schemas that should successfully evaluate against `{"foo": 1}`, but which might trip up a naive infinite-loop detector, if implemented with a "have I been to this position in the schema before?" check:
```yaml
$defs:
  mydef:
    $id: /properties/foo
properties:
  foo:
    $ref: /properties/foo
```
```yaml
$defs:
  int:
    type: integer
anyOf:
  - additionalProperties:
      $ref: '#/$defs/int'
  - additionalProperties:
      $ref: '#/$defs/int'
```
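To make the false positive concrete, here is a toy sketch (my own illustration, not code from any implementation in this thread) of a validator for a micro-subset of JSON Schema, run against the second schema above. A guard keyed on schema position alone, accumulated over the whole run, misfires once both `anyOf` branches are evaluated (as annotation-collecting validators do); a guard keyed on the (schema position, instance position) pair held only along the active evaluation path accepts `{"foo": 1}`:

```python
# Toy validator for a micro-subset of JSON Schema (hypothetical sketch).
# Contrasts two infinite-loop guards:
#   naive_seen: schema position only, shared across the whole run
#   guard:      (schema position, instance position) pairs on the active path

SCHEMA = {
    "$defs": {"int": {"type": "integer"}},
    "anyOf": [
        {"additionalProperties": {"$ref": "#/$defs/int"}},
        {"additionalProperties": {"$ref": "#/$defs/int"}},
    ],
}

def resolve(root, ref):
    # Same-document JSON Pointer resolution, e.g. "#/$defs/int".
    node = root
    if ref not in ("#", ""):
        for token in ref.lstrip("#/").split("/"):
            node = node[token]
    return node

def validate(root, schema, s_ptr, instance, i_ptr, guard, naive_seen=None):
    if naive_seen is not None:
        # naive check: have I ever been to this schema position before?
        if s_ptr in naive_seen:
            raise RecursionError(f"suspected loop at {s_ptr}")
        naive_seen.add(s_ptr)
    else:
        # sound check: same schema AND instance position on the current path
        if (s_ptr, i_ptr) in guard:
            raise RecursionError(f"loop at {(s_ptr, i_ptr)}")
        guard = guard | {(s_ptr, i_ptr)}
    ok = True
    if "$ref" in schema:
        ok &= validate(root, resolve(root, schema["$ref"]), schema["$ref"],
                       instance, i_ptr, guard, naive_seen)
    if schema.get("type") == "integer":
        ok &= isinstance(instance, int) and not isinstance(instance, bool)
    if "anyOf" in schema:
        # evaluate every branch (annotation collection does not short-circuit)
        results = [validate(root, sub, f"{s_ptr}/anyOf/{i}",
                            instance, i_ptr, guard, naive_seen)
                   for i, sub in enumerate(schema["anyOf"])]
        ok &= any(results)
    if "additionalProperties" in schema and isinstance(instance, dict):
        for name, value in instance.items():
            ok &= validate(root, schema["additionalProperties"],
                           f"{s_ptr}/additionalProperties",
                           value, f"{i_ptr}/{name}", guard, naive_seen)
    return ok
```

With the pair guard, `validate(SCHEMA, SCHEMA, "#", {"foo": 1}, "", frozenset())` returns `True`; with `naive_seen=set()` instead, the second `anyOf` branch re-enters `#/$defs/int` and the naive guard misfires with a `RecursionError` even though the schema is perfectly evaluable.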
We already have such a test, one that looks like infinite recursion but is not. But any implementation that has not implemented infinite-recursion guards will still pass such test cases.
Getting into an infinite loop can be treated as a vulnerability in an implementation. As a library author, I myself was not aware that infinite recursion was possible; only much later did I realise it and fix it in my implementation. Since every implementation looks to this test suite, having such a test case would encourage library authors to implement a guard.
The spec clearly says:

> A schema MUST NOT be run into an infinite loop against an instance.

We could just test that part.
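As an illustration (mine, not a test from the suite), a genuinely cyclic schema shows what that requirement rules out: an implementation that follows `$ref` with no guard recurses until the host language gives up, in Python with the very `RecursionError` this issue hit:

```python
# A schema whose $ref chain is a true cycle: #/$defs/loop refers to itself.
CYCLIC = {
    "$defs": {"loop": {"$ref": "#/$defs/loop"}},
    "$ref": "#/$defs/loop",
}

def resolve(root, ref):
    # Same-document JSON Pointer resolution.
    node = root
    for token in ref.lstrip("#/").split("/"):
        node = node[token]
    return node

def unguarded_validate(root, schema, instance):
    # No loop guard at all: a cyclic $ref recurses forever,
    # until Python's own recursion limit trips.
    if "$ref" in schema:
        return unguarded_validate(root, resolve(root, schema["$ref"]), instance)
    return True

try:
    unguarded_validate(CYCLIC, CYCLIC, 1)
except RecursionError as e:
    print(e)  # a "maximum recursion depth exceeded" message
```

A guard that detects re-entry of the same (schema location, instance location) pair on the current evaluation path would reject this schema deterministically instead of crashing.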
I suggest one of the following changes to the test suite to accommodate this test case:

- `valid` takes either a boolean or the string `"undefined"`. When `"valid": "undefined"`, one could just validate against `data` and ignore the result.
- the `valid` property becomes optional. When `valid` is missing, one could just validate against `data` and ignore the result.

Changing the test suite like that is actually pretty impactful. We have a LOT of consumers. Adding a test that's expected to fail would require that everyone update their test runners.
It's not out of the realm of possibility, but we need to be careful about it. If we do this wrong, it'll break consumers' builds. It's something that we've looked at doing before and decided against.
Also (and again), we can't test undefined behavior.
ok. i understand. you may close the PR.
example from https://datatracker.ietf.org/doc/html/draft-bhutton-json-schema-01#section-9.4.1
the above section says:
Test passed with the implementation https://github.com/santhosh-tekuri/boon. If this looks OK, I will add it to the remaining drafts.