gregsdennis closed this 1 year ago
I've added `$id`s to all of the output validation schemas. This provides a base URI for the `$ref` to use.
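As a rough sketch of what that looks like (the URI below is purely illustrative, not the actual identifier used in the suite), the output validation schema carries an `$id`, so anything that references it has a stable base URI to resolve against:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://example.com/output/2020-12/schema",
  "$defs": {
    "outputUnit": {
      "type": "object",
      "required": ["valid", "keywordLocation", "instanceLocation"],
      "properties": {
        "valid": { "type": "boolean" },
        "keywordLocation": { "type": "string" },
        "instanceLocation": { "type": "string" }
      }
    }
  }
}
```

A test's output schema can then point at it with something like `{ "$ref": "https://example.com/output/2020-12/schema#/$defs/outputUnit" }`, and the reference resolves the same way regardless of where the file happens to live on disk.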
I also implemented this new suite in my lib (PR linked ☝️) and I'm happy to report that I fail the 2019 & 2020 cases! 🤦
I now pass most of the 2019/2020 cases. There's one test in each that I can't pass because my approach for draft-next
doesn't always track the keywords in passing subschemas, so I can't correctly report the full keyword locations. It's fine, though: I just mark those tests as inconclusive, since I've decided not to fully support those drafts.
I think we should likely merge these! I've only given the content of the tests themselves a few skims, but they seem like a nice starting point. We may as well get them in and iterate; merging would also give them a bit more visibility, even if we need to make a tweak or two once others try running them.
So yeah, thumbs up from me for merging whenever you're comfortable, or if you'd rather hear from others first, that's obviously fine with me too.
I'm fine with merging as a starting point. Forgive me for not re-reading the README changes, but does it say somewhere that people shouldn't rely on these tests yet because they're still being developed and may contain errors?
Probably a very good idea, especially considering the last conversation.
Begins to address #247
This is a go at creating some output tests. It includes 2019-09/2020-12 and draft-next tests for:
- `type` — creates a proper node, and may include a message if it fails
- `readOnly` — generates an annotation of its value

Hats off to @karenetheridge, who proposed using a schema to validate the output. This creates a really nice way to target the bit of the output that a test is focused on without requiring an explicit error message or relying on specific output unit sequencing.
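For anyone skimming, here's roughly what that looks like in practice (the field names and exact shape are illustrative, not necessarily what the files in this PR settled on): the `output` member holds a schema that is applied to the validator's output, so this hypothetical `type` test only asserts that a failure reports an error unit at `/type`, without caring about messages or ordering.

```json
{
  "description": "type keyword output",
  "schema": {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "string"
  },
  "tests": [
    {
      "description": "a failing type produces an error unit at /type",
      "data": 42,
      "output": {
        "basic": {
          "properties": {
            "errors": {
              "contains": {
                "properties": {
                  "keywordLocation": { "const": "/type" },
                  "instanceLocation": { "const": "" }
                },
                "required": ["keywordLocation", "instanceLocation"]
              }
            }
          },
          "required": ["errors"]
        }
      }
    }
  ]
}
```

The nice part is that anything the test doesn't mention is simply left unconstrained, so implementations are free to include extra units or messages.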
I still have structural tests planned, but I figured this would be enough to start.
I've also included a README for the folder.