Closed: edmund-huber closed this issue 9 years ago
This was one of my original goals when adapting the json-schema code that formed the foundation, but unfortunately the nested nature of things has made this more difficult than I'd anticipated. It is definitely still on the agenda, but I'm afraid I haven't had time to consider the best way to implement it (probably a mode of some sort that allows parsing to continue after an error, appending all errors to a master list).
Here is an incomplete diff for an approach to this issue, using `yield` to collect `FieldValidationError`s while keeping `SchemaError`s raised.
I have made only one part of the tests (the value tests) work so far, but I'm confident I can get the rest working too. The module-as-main is also broken, but that's easily fixed.
Is this the kind of thing you had in mind?
N.B.: `SchemaError`s don't necessarily manifest unless you exhaust the validation-errors generator. Feature? Bug?
This is an interesting approach; as long as it doesn't cause any regressions, I'd be glad to see it proceed.
Hello! Here is the complete diff for the approach. All tests pass, and module-as-main works. Ah, I forgot to amend the README.
The main concern I have is that this would completely change how the library is used -- as I said, if you don't exhaust the generator you may miss errors (both validation errors and schema errors). As far as the README goes, something like a simple `validictory.validate(..)` actually does nothing, but `list(validictory.validate(..))` does.
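To illustrate the concern above, here is a minimal sketch (assumed shape, not validictory's real code) of why a generator-based `validate()` does nothing until it is exhausted:

```python
# Hypothetical generator-based validate(): errors are yielded lazily,
# so no checks actually run until the generator is consumed.
def validate(data, schema):
    if not isinstance(data, schema["type"]):
        yield "expected %s, got %r" % (schema["type"].__name__, data)

validate(123, {"type": str})                  # creates a generator; no checks run
errors = list(validate(123, {"type": str}))   # exhausting it runs the checks
print(errors)
```

This is ordinary Python generator semantics: the function body only executes as the caller iterates, which is why a bare call silently discards all errors.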
However, changing the code to return lists of errors would have been significantly more work and would probably have detracted from the readability of the source.
Instead of changing `validate`, a nicer solution could be a parameter that triggers this behavior, or a separate method.
The jsonschema package offers `iter_errors`, which provides a similar feature: http://python-jsonschema.readthedocs.org/en/latest/validate/#jsonschema.IValidator.iter_errors
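The `iter_errors`-style API can be sketched roughly like this (an assumed, self-contained illustration of the pattern, not jsonschema's actual implementation): `validate()` raises on the first problem, while `iter_errors()` lazily yields every problem.

```python
# Sketch of a validator exposing both a raising validate() and a
# generator-based iter_errors(), in the style mentioned above.
class Validator:
    def __init__(self, schema):
        self.schema = schema  # simplified schema: {field: expected_type}

    def iter_errors(self, data):
        # Yield a message for every field that fails, instead of stopping
        # at the first failure.
        for field, expected in self.schema.items():
            if not isinstance(data.get(field), expected):
                yield "%s is not of type %s" % (field, expected.__name__)

    def validate(self, data):
        # Backwards-compatible behavior: raise on the first error.
        for error in self.iter_errors(data):
            raise ValueError(error)

v = Validator({"name": str, "age": int})
all_errors = list(v.iter_errors({"name": 1, "age": "x"}))
print(all_errors)
```

The appeal of this design is that the raising method is a thin wrapper over the generator, so both behaviors share one code path.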
any future plans for this feature?
I think what it'd take for a merge would be something like @dmr's suggestion: a new method. Changing how `validate()` works to essentially be a no-op isn't something I'm willing to do, as it'd break a ton of code that's out there.
I am choosing a validator method today for a new project. I could convert all my data to json or xml and validate with json schema or xsd. But the data will be generated by python and consumed by python so I might as well keep it as python data.
I'm interested in this project because the schema model is similar to jsonschema. But I really need to list all validation errors at once (this ticket) and their locations (could be in JSON Pointer format), ticket #56.
Do you have any thoughts on this? Also, it's a neat idea to use JSON Schema to validate Python. I wonder if you'd support `$ref`, so I could use the same schema to validate JSON or Python data (without having to convert from one to the other). I know that's not as Pythonic as referencing dicts in code, but it's handy.
We'd certainly be open to merging something similar to what's been written but it'd need to be backwards-compatible, so probably a new method or a flag.
@edmund-huber What do you think about the API with `iter_errors` as suggested above? I'd like to have and use this feature, and this issue has been open for too long now...
I've committed an approach to this: if you pass `fail_fast=False` you'll now get back a `MultipleValidationError`, which has an `errors` attribute.
I'm going to revise the API a bit between now and release, but feedback against the dev branch is welcome.
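The `fail_fast` design described above can be sketched as follows. This is a self-contained illustration of the pattern (collect errors, raise them together at the end), not validictory's actual source; the simplified `{field: type}` schema is an assumption for brevity.

```python
# Sketch of the fail_fast design: with fail_fast=True, raise on the first
# error; with fail_fast=False, collect all errors and raise them together.
class MultipleValidationError(ValueError):
    def __init__(self, errors):
        self.errors = errors
        super().__init__("%d validation errors" % len(errors))

def validate(data, schema, fail_fast=True):
    errors = []
    for field, expected in schema.items():
        value = data.get(field)
        if not isinstance(value, expected):
            msg = "%r: expected %s, got %r" % (field, expected.__name__, value)
            if fail_fast:
                raise ValueError(msg)
            errors.append(msg)
    if errors:
        raise MultipleValidationError(errors)

caught = None
try:
    validate({"name": 123, "age": "old"}, {"name": str, "age": int},
             fail_fast=False)
except MultipleValidationError as exc:
    caught = exc
print(len(caught.errors))  # both errors are reported in one exception
```

Keeping `fail_fast=True` as the default preserves the existing raise-on-first-error behavior, which is what makes this change backwards-compatible.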
Are there any release plans yet?
I'll take a look at getting a release put together in the next week or so.
Thanks!
1.0.0a1 is out now, feedback very much appreciated!
I like the feature and it works for me. Two months without complaints --> release 1.0.0?
I like this library!
But, it would really be nice if I could see all of the applicable errors in the input, rather than only the first.