Closed benhamill closed 9 years ago
Hey @benhamill!
Yes, ramlfications does some validation (it's not 100% of the RAML spec). You can turn it off globally in the config file (I'm on mobile, but there's an example in the tests).
But I have actually been thinking about the printing of all validation errors at once, like linters do. I think it's much more useful to a dev and it's good to hear someone else thinking about it.
Do you think that constructing a RAML file via Python and then "dumping" it (like `json.dump`/`json.dumps`) would be helpful to you and your team? With validation fitting in at some point?
I'd say we are very unlikely to want to build our spec in Python and dump it. We may be dealing with specs that come out of other tools and will probably be hand-writing them, as well. So our interest is in making sure the hand-written stuff is valid.
I was thinking that, after reading in the spec with the YAML library, during the pass that makes more... intelligent business objects out of that result, it would be good to hook in validation for each node in the tree. And that if a problem is encountered, you can store it and then basically stop parsing (and also validating) that node's children and just move on to the next sibling node. Like... once you hit a problem, you likely can't trust that your understanding or expectations about the node you're on (and, consequently, the nodes below the one you're on) even apply, so it seems safe to bail. Does this seem reasonable?
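A minimal sketch of that walk, assuming a toy node type and a per-node `validate_node` check (both hypothetical, not ramlfications code): collect errors at each node, and stop descending into a node's children as soon as that node fails, while continuing with its siblings.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical stand-in for a parsed RAML tree node.
    name: str
    valid: bool = True
    children: list = field(default_factory=list)

def validate_node(node):
    # Stand-in for real per-node checks; returns a list of error strings.
    return [] if node.valid else [f"{node.name}: invalid"]

def collect_errors(node, errors):
    problems = validate_node(node)
    if problems:
        errors.extend(problems)
        return  # bail: children of a broken node can't be trusted
    for child in node.children:  # but keep going with siblings
        collect_errors(child, errors)
```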
Also, from an interface perspective, I was thinking something like:
```python
r = ramlfications.parse(path)
r.valid   # => False
r.errors  # => A list of strings, describing the problems?
```
Or maybe the errors should be full objects so they can have a description and a line number and file name, etc. Not sure.
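One possible shape for such error objects (a sketch, not anything ramlfications exposes): a small value type carrying a message plus optional file/line context, with a readable string form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationError:
    # Hypothetical error object; field names are illustrative.
    message: str
    file: Optional[str] = None
    line: Optional[int] = None

    def __str__(self):
        # Prefix with "file:line: " when location info is available.
        loc = f"{self.file}:{self.line}: " if self.file else ""
        return f"{loc}{self.message}"
```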
Let me think on this for a day. This makes sense to me but just want to put some thought on it.
So the current validate functionality (`ramlfications.validate(path)`) doesn't work well for you in your workflow? I mean, I can see how it wouldn't in what you proposed. (Hmm, seems like I'm talking through my thoughts right now :) )
Thanks!
Whaaaaaaaaat. I didn't even know that was a thing! Witness my superior investigative skills! Haha. I'll check it out and report back.
So, yeah. This looks... aaaaaaalmost like what I'd want. But it relies on exceptions? Or is that just how you happen to be using the validation stuff from attrs? Could it do something different when it sees something fishy? Also, there's some state tracking: in particular, with no default media type defined at the root of a spec, each `body` for a response needs to have that as a key. Right now, ramlfications treats a missing key as OK whether there's a default or not. Some stuff like that I think we'd want to get more rigorous about.
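That media-type rule could look something like this sketch, operating on the raw dict straight from the YAML load (the structure is simplified and none of this is ramlfications code):

```python
def missing_media_types(spec):
    """Return "path method code" strings for response bodies that give no
    media-type key while the root declares no default mediaType.
    `spec` is a plain dict as loaded from YAML (simplified structure)."""
    errors = []
    if "mediaType" in spec:
        return errors  # a root default covers bare bodies
    for res_path, resource in spec.items():
        if not res_path.startswith("/"):
            continue  # skip non-resource root keys like title/version
        for method in ("get", "post", "put", "delete"):
            responses = (resource.get(method) or {}).get("responses") or {}
            for code, resp in responses.items():
                body = (resp or {}).get("body")
                # Media-type keys contain a slash, e.g. "application/json".
                if body is not None and not any("/" in key for key in body):
                    errors.append(f"{res_path} {method} {code}")
    return errors
```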
Think about it over the weekend. I'll check back Monday unless you post here. Thanks, again!
Ping. Later today, a co-worker and I will likely start looking at what an implementation of our dream validation situation would look like. If you have time to give some thoughts in that time frame, I'd love to hear them. If not, no worries and probably we'll have some real code to discuss soonish.
Also, we may make integrating jsonschema part of that, since it already has robust validation built in. We'll probably emulate the interface it presents for validation, as well.
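For reference, jsonschema's validators expose `iter_errors()` (lazily yield every problem) alongside `validate()` (raise on the first). Emulating that split for RAML might look like this toy sketch; the two checks are made up for illustration:

```python
class RAMLValidator:
    # Toy validator emulating jsonschema's interface; checks are illustrative.

    def iter_errors(self, spec):
        # Yield every problem found, like jsonschema's iter_errors().
        if "title" not in spec:
            yield "root is missing required 'title'"
        if "baseUri" not in spec:
            yield "root is missing 'baseUri'"

    def validate(self, spec):
        # Raise on the first problem, like jsonschema's validate().
        for error in self.iter_errors(spec):
            raise ValueError(error)
```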
Hey Ben - sorry I have been on vacation (still am).
I'd love to hear your dream validation setup; I'd be willing to integrate it in if it seems reasonable. I'd prefer to have a practical validation and if that means breaking backwards compatibility, that'd be fine. So I'd be very interested in knowing what works for you at UA.
Re jsonschema - I am actually thinking of integrating that (or another JSON library) for JSON validation (I'm on mobile, but it's referenced in an issue).
Since you're asking for this, I think it's feasible to start this next week on improving validation when I return from vacation.
Some other stuff popped up (at work?!? What?!?), so strike my "soonish". We'll check back in when we get here. Probably #19 will get fixed up first (it's got bugs blocking some other stuff we're trying to do).
I don't think the validation needs to be more complex than:
```python
errors = ramlfications.validate(path)
errors  # => List of dictionaries
```

---

```python
config = ramlfications.setup_config()
config['validate'] = True
r = ramlfications.parse(path, config)  # Raises a validation exception or sets r.errors
```
Thoughts?
@cerivera yeah I agree - the only thing that needs to be implemented is to collect the errors.
@econchick I'll help with that, and also with bringing more of the RAML spec into your validators.
@cerivera wow that'd be awesome! and very much appreciated, thank you!
:+1:
And thanks to @benhamill for getting the ball rolling on this.
Closing this because we're all awesome!
We've been enjoying using ramlfications a lot, so far, on my team at UA. One thing we're realizing we want as we spec more of our public API is an automated check that the RAML we're writing is valid (not whether the endpoint we're specifying fulfills the spec).
My first thought was to use ramlfications to read in the spec and then write a validator on top of it. But the more I messed with that, the more I noticed that ramlfications does at least some validation of its own. Is this a solid interpretation of the code? I'm looking at places like `resp_body`, which will explode if the body isn't defined correctly.

I was thinking: should validation be built into ramlfications itself? If not, could we somehow make ramlfications more permissive so that validation can happen after parsing (dubious)? In particular, I'm interested in doing "lazy" validation so that a validator can report all errors at once, rather than just the first one it runs into.
What are your thoughts on this? What else should we talk about in order to figure out a good way forward? I'll go ahead and start thinking about what this might look like from an interface perspective in the meantime.
Thanks for all your time on this project! :heartpulse: