I'd be interested in the results if you could post them, particularly the errors there.
I'm not super familiar with it, but I'd imagine it isn't something we'd want to support 100%, as the current JSON Schema draft has some messy spots, and some things like $ref just don't make sense for validictory's architecture.
That said, a better list of what is and isn't supported would be great for the documentation I was hoping to write soon.
This is the code I used: run_jsonschema_test_suite.py
"""
Test runner for the JSON Schema official test suite
Tests comprehensive correctness of each draft's validator.
See https://github.com/json-schema/JSON-Schema-Test-Suite for details.
"""
import glob
import json
import io
import itertools
import os
import re
import subprocess
import sys
try:
    from sys import pypy_version_info
except ImportError:
    pypy_version_info = None

if sys.maxunicode == 2 ** 16 - 1:  # This is a narrow build.
    def narrow_unicode_build(case, test):
        if "supplementary Unicode" in test["description"]:
            return "Not running surrogate Unicode case, this Python is narrow."
else:
    def narrow_unicode_build(case, test):  # This isn't, skip nothing.
        return
PY3 = sys.version_info[0] == 3
REPO_ROOT = os.path.abspath(os.path.dirname(__file__))
SUITE = os.getenv("JSON_SCHEMA_TEST_SUITE", os.path.join(REPO_ROOT, "JSON-Schema-Test-Suite"))
TESTS_DIR = os.path.join(SUITE, "tests")
import unittest
from unittest import TestCase
from validictory import validate, FieldValidationError
# $ref preprocessing
def dict_walk(node, action, match='$ref'):
    if len(node.keys()) == 1 and match in node:
        return action(node[match])
    else:
        newdict = {}
        for key, value in node.items():
            if isinstance(value, dict):
                value = dict_walk(node=value, action=action, match=match)
            if isinstance(value, list):
                value = [
                    dict_walk(node=entry, action=action, match=match)
                    if isinstance(entry, dict) else entry
                    for entry in value
                ]
            newdict[key] = value
        return newdict

def get_ref_path_for_ref_url(url):
    if not url.startswith('#/'):
        raise ValueError('Only local references allowed')
    return url.lstrip('#/').split('/')

def get_ref_definition(schema, matched_value):
    ref_path = get_ref_path_for_ref_url(matched_value)
    # traverse path down or raise exception
    found_definition = schema
    for component in ref_path:
        found_definition = found_definition[component]
    return found_definition

def validictory_preprocess_ref(schema):
    replace_ref_with_definition = lambda matched_value: get_ref_definition(schema, matched_value=matched_value)
    return dict_walk(
        node=schema,
        action=replace_ref_with_definition
    )
# $ref preprocessing until here
def make_case(schema, data, valid, name):
    if valid:
        def test_case(self):
            # kwargs = getattr(self, "validator_kwargs", {})
            print('valid', data, schema)
            validate(data, validictory_preprocess_ref(schema), required_by_default=False)  # , cls=self.validator_class, **kwargs)
    else:
        def test_case(self):
            # kwargs = getattr(self, "validator_kwargs", {})
            print('invalid', data, schema)
            with self.assertRaises(FieldValidationError):
                validate(data, validictory_preprocess_ref(schema), required_by_default=False)  # , cls=self.validator_class, **kwargs)
    if not PY3:
        name = name.encode("utf-8")
    test_case.__name__ = name
    return test_case
def maybe_skip(skip, test_case, case, test):
    if skip is not None:
        reason = skip(case, test)
        if reason is not None:
            test_case = unittest.skip(reason)(test_case)
    return test_case
def load_json_cases(tests_glob, ignore_glob="", basedir=TESTS_DIR, skip=None):
    if ignore_glob:
        ignore_glob = os.path.join(basedir, ignore_glob)

    def add_test_methods(test_class):
        ignored = set(glob.iglob(ignore_glob))
        for filename in glob.iglob(os.path.join(basedir, tests_glob)):
            if filename in ignored:
                continue
            validating, _ = os.path.splitext(os.path.basename(filename))
            id = itertools.count(1)
            with open(filename) as test_file:
                for case in json.load(test_file):
                    for test in case["tests"]:
                        name = "test_%s_%s_%s" % (
                            validating,
                            next(id),
                            re.sub(r"[\W ]+", "_", test["description"]),
                        )
                        test_case = make_case(
                            data=test["data"],
                            schema=case["schema"],
                            valid=test["valid"],
                            name=name,
                        )
                        test_case = maybe_skip(skip, test_case, case, test)
                        # test_class is the class namespace dict, so guard
                        # against duplicate test names before adding the method
                        assert name not in test_class, name
                        test_class[name] = test_case
        return test_class
    return add_test_methods
class CollectTestsMeta(type):
    def __new__(cls, name, bases, attrs):
        # attrs is the namespace of the class being built; load_json_cases
        # fills it with one test method per suite case before type.__new__ runs
        fn = load_json_cases(
            tests_glob="draft4/*.json",
            skip=narrow_unicode_build,
            ignore_glob="draft4/refRemote.json",
        )
        fn(test_class=attrs)
        return type.__new__(cls, name, bases, attrs)
class TestJsonSchemaDraft4(TestCase, metaclass=CollectTestsMeta):
    pass
if __name__ == '__main__':
    unittest.main()
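For a sense of what the $ref preprocessing does: the snippet below is a toy schema (not one from the test suite) run through validictory_preprocess_ref from the script above. Only local "#/..." references are supported, and the referenced subschema simply gets inlined so validictory never sees the $ref key.
schema = {
    "type": "object",
    "properties": {
        "count": {"type": "integer"},
        "other_count": {"$ref": "#/properties/count"},
    },
}
expanded = validictory_preprocess_ref(schema)
# expanded["properties"]["other_count"] is now {"type": "integer"}, so plain
# validictory can validate the document without knowing about $ref:
validate({"count": 3, "other_count": 5}, expanded, required_by_default=False)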
I basically copied the "load_json_cases" helper from the jsonschema project (https://github.com/Julian/jsonschema/blob/master/jsonschema/tests/test_jsonschema_test_suite.py); they did all the work, I just built a metaclass around it.
To run this you have to check out the test suite and run the script:
$ nosetests run_jsonschema_test_suite.py --with-specplugin
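For the checkout itself, something along these lines should work; the default directory name and the JSON_SCHEMA_TEST_SUITE override are the ones the script above looks for (the /path/to/ below is just a placeholder):
$ git clone https://github.com/json-schema/JSON-Schema-Test-Suite
$ export JSON_SCHEMA_TEST_SUITE=/path/to/JSON-Schema-Test-Suite  # only if the checkout lives elsewhere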
What package includes --with-specplugin? I can't seem to find it. Without it I'm getting:
(val3)~/code/libs/validictory(master|…)$ nosetests testsuite.py
Ran 0 tests in 0.000s
Ah, never mind, ignore the last comment. I got it working once I looked at the checkout directory more closely.
Hmm, it doesn't look like it is treating FieldValidationError as a proper exception, which is the cause of most of the errors. Might need to mess with this more.
--with-specplugin is an option for nose to enable the spec plugin (pip install spec, https://github.com/bitprophet/spec).
You can also run it using plain Python, or use pytest:
python run_jsonschema_test_suite.py
py.test run_jsonschema_test_suite.py
I adjusted the script above: line 106 should catch FieldValidationError instead of AssertionError. But still:
Ran 245 tests in 0.104 seconds
FAILED (failures=37, errors=23, skipped=0)
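For triaging those, a quick check outside the test runner shows whether validictory raises nothing at all, raises FieldValidationError, or raises something else entirely for a given case; the schema and data below are placeholders to be swapped for a failing case from the suite:
import validictory

schema = {"type": "integer"}   # substitute the schema of a failing case here
data = "not an integer"        # and the corresponding data

try:
    validictory.validate(data, schema, required_by_default=False)
except validictory.FieldValidationError as exc:
    print("FieldValidationError:", exc)
except Exception as exc:
    print("unexpected %s: %s" % (type(exc).__name__, exc))
else:
    print("no exception raised")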
Looks like a lot of these are oneOf and dependencies.
I'd be open to support for those if there's demand.
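For context, oneOf in draft 4 means the data must validate against exactly one of the subschemas. A minimal sketch of that behaviour on top of validictory's existing validate call, purely illustrative and not part of validictory's API:
import validictory

def matches_exactly_one(data, subschemas):
    # draft-4 oneOf: valid if and only if exactly one subschema accepts the data
    matches = 0
    for subschema in subschemas:
        try:
            validictory.validate(data, subschema, required_by_default=False)
        except validictory.FieldValidationError:
            continue
        matches += 1
    return matches == 1

# 3 matches only the first subschema, so this returns True:
print(matches_exactly_one(3, [{"type": "integer"}, {"type": "string", "minLength": 2}]))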
I'd like to see support for dependencies.
I ran the dependency tests. 7 out of 16 fail:
nosetests run_jsonschema_test_suite.py:TestDraft4Dependencies
FF..FFF...F....F
======================================================================
FAIL: test_dependencies_10_missing_other_dependency (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'quux': 2, u'bar': 1}
schema {u'dependencies': {u'quux': [u'foo', u'bar']}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_11_missing_both_dependencies (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'quux': 1}
schema {u'dependencies': {u'quux': [u'foo', u'bar']}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_14_wrong_type (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'foo': u'quux', u'bar': 2}
schema {u'dependencies': {u'bar': {u'properties': {u'foo': {u'type': u'integer'}, u'bar': {u'type': u'integer'}}}}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_15_wrong_type_other (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'foo': 2, u'bar': u'quux'}
schema {u'dependencies': {u'bar': {u'properties': {u'foo': {u'type': u'integer'}, u'bar': {u'type': u'integer'}}}}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_16_wrong_type_both (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'foo': u'quux', u'bar': u'quux'}
schema {u'dependencies': {u'bar': {u'properties': {u'foo': {u'type': u'integer'}, u'bar': {u'type': u'integer'}}}}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_4_missing_dependency (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'bar': 2}
schema {u'dependencies': {u'bar': [u'foo']}}
should be invalid
--------------------- >> end captured stdout << ----------------------
======================================================================
FAIL: test_dependencies_9_missing_dependency (run_jsonschema_test_suite.TestDraft4Dependencies)
----------------------------------------------------------------------
Traceback (most recent call last):
File "validictory/run_jsonschema_test_suite.py", line 83, in test_case
validate(data, schema2, required_by_default=False)
AssertionError: FieldValidationError not raised
-------------------- >> begin captured stdout << ---------------------
data {u'quux': 2, u'foo': 1}
schema {u'dependencies': {u'quux': [u'foo', u'bar']}}
should be invalid
--------------------- >> end captured stdout << ----------------------
----------------------------------------------------------------------
Ran 16 tests in 0.004s
FAILED (failures=7)
You can run these tests on your computer by using our branch: https://github.com/netsyno/validictory
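For reference, the two shapes failing above are draft-4 property dependencies (an array of property names that must also be present) and schema dependencies (a subschema the whole object must satisfy when the trigger property is present). A rough sketch of those semantics on top of validictory, purely illustrative (the helper name and the plain ValueError are made up, not validictory's API):
import validictory

def check_draft4_dependencies(data, dependencies):
    # dependencies maps a trigger property to either a list of required
    # property names or a subschema for the whole object (draft-4 semantics)
    for prop, dep in dependencies.items():
        if prop not in data:
            continue
        if isinstance(dep, list):
            missing = [name for name in dep if name not in data]
            if missing:
                # a real implementation inside validictory would presumably
                # raise FieldValidationError here instead
                raise ValueError("'%s' requires %s" % (prop, missing))
        elif isinstance(dep, dict):
            validictory.validate(data, dep, required_by_default=False)

# the two failing shapes from the output above:
for data, deps in [
    ({"bar": 2}, {"bar": ["foo"]}),                          # missing dependency
    ({"foo": "quux", "bar": 2},
     {"bar": {"properties": {"foo": {"type": "integer"},
                             "bar": {"type": "integer"}}}}),  # wrong type
]:
    try:
        check_draft4_dependencies(data, deps)
        print("valid:", data)
    except (ValueError, validictory.FieldValidationError) as exc:
        print("invalid:", data, "->", exc)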
Closed in light of #114.
I quickly ran the "JSON-Schema-Test-Suite" that apparently many projects are using (https://github.com/json-schema/JSON-Schema-Test-Suite).
I can share the results and commit my code if anybody is interested. Most cases fail for good reasons, I think.
What are your opinions about a test suite like this?
I think it would be good to know which features from the specification are supported and which are not.