ee7 opened this issue 3 years ago
I would also be in favor of trying to use the standard library first, and see how far we can take it. I don't mind us writing a bit of verbose code.
@ErikSchierboom updated the top post with some investigation of behavior in edge cases.
We probably want to define "valid JSON" in the spec, and perhaps explicitly forbid duplicate keys.
I'd suggest that jsony or nim-json-serialization might be best in the long-term. But maybe it's better to stick with the current approach until we've implemented all the linting rules, and refactor it later.
I don't know, both `jsony` and `nim-json-serialization` seem to be maintained by relatively few people and I'd be hesitant to use those libraries instead of the built-in JSON library. It's also telling that so far, no track has actually had this issue with trailing commas, which leads me to believe that it is not that big of a deal.
But it's probably simple to use a modified `std/json` with stricter parsing.
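For context on why stricter parsing is wanted: as noted elsewhere in this thread, unpatched `std/json` permits trailing commas and `//` / `/* */` comments. A rough illustration:

```nim
import std/json

# Unpatched std/json is lenient: both of these parse without error,
# even though strict JSON forbids them.
let a = parseJson("""{"x": 1,}""")          # trailing comma
let b = parseJson("""{"x": 1 /* hi */}""")  # block comment
doAssert a["x"].getInt == 1
doAssert b["x"].getInt == 1
```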
You mean forking the existing code? Would this be something that you could PR to Nim itself?
> > But it's probably simple to use a modified `std/json` with stricter parsing.
>
> You mean forking the existing code?

I meant that we do some workaround such that `import std/json` instead uses our own modified version of `lib/pure/json.nim` and/or `lib/pure/parsejson.nim`. We can do this by either:

1. …
2. using `patchFile` in our `config.nims` file. This is better because it also affects the local development environment in the same way.

> Would this be something that you could PR to Nim itself?
Yes, it's possible to add it to Nim itself. But it wouldn't be available until Nim 1.6.0 anyway, which might take a while. (We shouldn't build a configlet release with the `devel` Nim compiler.)

It would be added as an opt-in strict mode, since making `parseJson` or `parseFile` strict by default would completely break backwards compatibility.
I'd suggest we should do option 2 in the meantime regardless. The main downside is that we wouldn't immediately get upstream bug fixes in the patched files, unless we backport the latest changes manually. But such upstream changes to `std/json` would rarely affect us anyway, and backporting should be trivial (given that our diff is probably small) if necessary.
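For illustration, the `patchFile` approach might look roughly like this `config.nims` sketch (the `patched/` directory is a hypothetical location for our modified copies, not configlet's actual layout):

```nim
# config.nims (NimScript)
# Redirect imports of the stdlib's json modules to our modified copies,
# so that `import std/json` picks up the stricter parser everywhere,
# including in local development builds.
patchFile("stdlib", "json", "patched/json")
patchFile("stdlib", "parsejson", "patched/parsejson")
```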
@ee7 I think that sounds like a good plan! 👍
In the meantime, we:

- use our patched `std/json` and `std/parsejson`
- use `jsony` for the "multi-key" checks in `configlet lint`
However, I'm still undecided about the best overall design for a refactor (not a high priority). For example, we could:

1. Use only `std/json`. Here, it's probably best to do a first pass to check the types, then use `json.to` to get an object.
2. Do simple checks with `std/json`, then more complex checks with `jsony`. This is the current approach.
3. Parse with `std/json`, and do all the other checks after deserializing via `jsony` to an object.
4. Avoid `std/json` entirely, and use only `jsony`. This is attractive from a performance perspective: we avoid the allocation of some dynamic `JsonNode`, which consumes nearly all of the `configlet lint` runtime, and instead directly populate an object. Performance isn't my top priority, but this would probably also help make the codebase more robust/readable/maintainable.

The latter two options have the downside of increasing our dependence on a non-stdlib package. However: the `jsony` source code is pretty short, and the author is a prolific and well-known member of the Nim community, who parses a lot of JSON.

The latter options also give us less control over error messages. For example, if we use `jsony` only, the most straightforward implementation means we'll only get one error message for a file that has a type error as well as other problems. And unless `jsony` gains a strict mode, we'll have to fork it to disallow at least:

- Trailing commas (assuming that stays necessary for later parsing by Ruby)
- Different key name capitalization
> Avoid `std/json` entirely, and use only `jsony`. This is attractive from a performance perspective: we avoid the allocation of some dynamic `JsonNode`, which consumes nearly all of the `configlet lint` runtime, and instead directly populate an object. Performance isn't my top priority, but this would probably also help make the codebase more robust/readable/maintainable.
I honestly don't care about performance, as configlet is already incredibly fast. Robustness/readability/maintainability are much more important for configlet.
> The latter options also give us less control over error messages. For example, if we use `jsony` only, the most straightforward implementation means we'll only get one error message for a file that has a type error as well as other problems.
So if I'm interpreting this correctly, jsony is different from std/json in that it returns an error message if a type mismatch between the JSON content and the type to serialize to occurs? And in that case jsony only returns one (the first?) error? If so, I'd be totally fine with that. We'd be able to remove tons of validation code and type errors should be quite rare.
> And unless `jsony` gains a strict mode, we'll have to fork it to disallow at least: [trailing commas and different key name capitalization]
Would that be a lot of work?
> jsony is different from std/json in that it returns an error message if a type mismatch between the JSON content and the type to serialize to occurs?
Yes. We can also fail fast for e.g. seeing a slug that is not kebab-case.
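Failing fast on a bad slug could look roughly like this sketch. The `Slug` and `Track` types and the kebab-case check are hypothetical illustrations, not configlet's actual code; the `parseHook` override is jsony's documented customization point:

```nim
import std/strutils
import pkg/jsony

type
  Slug = distinct string   # hypothetical wrapper type
  Track = object
    slug: Slug

# Custom jsony parse hook: parse the string, then validate it immediately,
# so a bad slug aborts parsing as soon as it is seen.
proc parseHook(s: string, i: var int, v: var Slug) =
  var str: string
  parseHook(s, i, str)  # reuse jsony's built-in string parsing
  # Simplistic kebab-case check, for illustration only.
  if str.len == 0 or not str.allCharsInSet({'a'..'z', '0'..'9', '-'}):
    raise newException(ValueError, "slug is not kebab-case: " & str)
  v = Slug(str)

let t = """{"slug": "two-fer"}""".fromJson(Track)
doAssert t.slug.string == "two-fer"
doAssertRaises(ValueError):
  discard """{"slug": "Two_Fer"}""".fromJson(Track)
```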
> And in that case jsony only returns one (the first?) error?
Yes. Although I imagine it's technically possible to output all the type mismatches - but probably not worth it.
> If so, I'd be totally fine with that. We'd be able to remove tons of validation code and type errors should be quite rare.
I was thinking the same. Another advantage is better error messages: we'd find type errors at the time of parsing, and so we still have direct access to line number information.
> Would that be a lot of work?
I'd guess/hope that it wouldn't be too bad. It might even be simpler than having a separate first pass that checks the JSON is valid and that key name capitalization is correct. We could also try maintaining a patch, if the diff is small (this is what we do with cligen's `parseopt3.nim` currently).
> Another advantage is better error messages: we'd find type errors at the time of parsing, and so we still have direct access to line number information.
This is a very important point.
> We could also try maintaining a patch, if the diff is small
This has worked out well with `parseopt3.nim`. That file is from the standard library though, isn't it? I'm asking, because that file probably changes less than the jsony source code (see its commits).
> I've looked at what should be patched:
>
> - Trailing commas (assuming that stays necessary for later parsing by Ruby)
> - Different key name capitalization

The first one is still required, as I've just checked it with the latest Ruby version. The second one, what is that about? Is jsony strict about the casing of keys?
> This has worked out well with `parseopt3.nim`. That file is from the standard library though, isn't it?

It's from `cligen/parseopt3`, which is indeed derived from `std/parseopt`.
For us, one of the main differences is that `parseopt3` supports separating a short option and its value with a space, like `configlet sync -e bob` (see https://github.com/exercism/configlet/commit/a897d05cb0b4 for background).
> I'm asking, because that file probably changes less than the jsony source code
Yes, `jsony` will probably see more churn than `parseopt3`. But, in the same way that `cligen` receives lots of work that doesn't touch `parseopt3`, maybe a patch would touch only relatively stable code (from the below, it only needs to forbid trailing commas for a jsony-only approach).
> [checking for trailing commas] is still required, as I've just checked it with the latest Ruby version.
OK - thanks.
> Is jsony strict about the casing of keys?
It's not completely strict, but it's stricter than I thought/remembered. It turns out that the only looseness is "the value of a snake_case JSON key does set the value of a camelCase Nim object field". See the jsony docs, and the relevant jsony code.
I've tried to illustrate below how jsony behaves. Feel free to stare at this:
```nim
import pkg/jsony

type
  ObjA = object
    foo_bar: int
  ObjB = object
    anotherField: int

func init(T: typedesc[ObjA | ObjB], s: string): T =
  fromJson(s, T)

# Summary: jsony is stricter when the object field name is snake_case style.
func main =
  block:
    let t = ObjA.init """{"foo_bar": 1}"""
    doAssert t.foo_bar == 1
  # The value of a camelCase JSON key DOES NOT set the value of a
  # corresponding snake_case field.
  block:
    let t = ObjA.init """{"fooBar": 1}"""
    doAssert t.foo_bar == 0 # The default value.
  # And other capitalization is also not accepted.
  block:
    let t = ObjA.init """{"foo_Bar": 1}"""
    doAssert t.foo_bar == 0
  block:
    let t = ObjA.init """{"foobar": 1}"""
    doAssert t.foo_bar == 0
  # --------------------------------------------------------------------------
  # The value of a snake_case JSON key DOES set the value of a
  # corresponding camelCase field.
  block:
    let t = ObjB.init """{"another_field": 1}"""
    doAssert t.anotherField == 1
  block:
    let t = ObjB.init """{"anotherField": 1}"""
    doAssert t.anotherField == 1
  # But other capitalization is not accepted.
  block:
    let t = ObjB.init """{"another_Field": 1}"""
    doAssert t.anotherField == 0
  block:
    let t = ObjB.init """{"anotherfield": 1}"""
    doAssert t.anotherField == 0

main()
```
Summary: I think that unpatched parsing with jsony alone is sufficient to check JSON key names (and everything except trailing commas), as long as our spec for Exercism JSON files has no uppercase character in any key name, and we do one of these:

1. …
2. Write a `parseHook` for our specific objects.

I'd suggest the first. Which leaves us doing one of these:

1. Keep our patched `std/json` just to error for a trailing comma.
2. …
3. …
4. Make `configlet lint` call some non-Nim code, just to error for a trailing comma. For example, we could run `find . -name '*.json' -exec jq '.' {} + > /dev/null` when the `CI` environment variable exists (or when `jq` is installed, which it is in CI). This is simple, but means that a trailing comma may be detected only in CI, and not locally.
5. … the `jq` command in 4. I think this is bad.

I think 1 is best, but we can do 2, 3, or 4 as a first implementation if it turns out that 1 is difficult.
There is some subtlety though: if a user runs `configlet fmt` when there is a trailing comma, should configlet error or remove it? What about `configlet sync`? We could consider being permissive in what we accept, and strict in what we output (robustness principle). So maybe a jsony patch with configurable trailing comma behavior...
> I think 1 is best, but we can do 2, 3, or 4 as a first implementation if it turns out that 1 is difficult.
Agreed.
> There is some subtlety though: if a user runs `configlet fmt` when there is a trailing comma, should configlet error or remove it? What about `configlet sync`? We could consider being permissive in what we accept, and strict in what we output (robustness principle). So maybe a jsony patch with configurable trailing comma behavior...
I wouldn't mind erroring on a trailing comma for `configlet sync` or `configlet fmt`, as the official spec does not support trailing commas, so we would just be following the spec :)
@ErikSchierboom do you have a reference link for the "does not support trailing commas"? I did not find any mention of trailing commas, so I was not able to figure out whether they are simply not mentioned, not supported, or disallowed. I checked (searched for the word "trailing") v1.0 and v1.1 as it is here and did not find any mention of this. I might have missed it though.
@kotp That link is a spec for JSON APIs. For the standard for JSON itself, see the railroad diagrams on https://www.json.org. Trailing commas are allowed in JavaScript, though.
Thanks @ee7. I saw that as well, though I still cannot find anything about trailing commas being either unsupported or allowed, or even a "should" statement regarding this. The https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas document states that JSON disallows trailing commas, but does not show where that information is made known.
So I would vote to not support them, if it is true that they are disallowed.
Main options:

1. `std/json`
    - 1a. The approach so far: parse into a `JsonNode` and work only with that.
    - 1b. Parse into a `JsonNode`, then unmarshal into some object using `to`.
    - 1c. Plus `std/jsonutils`
2. `Araq/packedjson` - keeps everything as a string. Lower memory usage than `std/json`, and sometimes faster.
3. `planetis-m/eminim` - deserializes using `std/streams` directly to an `object`. Doesn't fully support object variants, but maybe that isn't a problem for us.
4. `status-im/nim-json-serialization` - deserializes using `nim-faststreams` directly to an `object`. Probably the most mature third-party option. Currently has a large dependency tree, including `chronos` and `bearssl`.
5. `treeform/jsony` - deserializes from `string` directly to an `object`.

(Note that `disruptek/jason` is serialization-only.)

There are also some more obscure ones that I haven't tried, and don't know anything about:

- `gabbhack/deser` and `gabbhack/deser_json`
- `Q-Master/packets`
- `xomachine/NESM`

Some of the above are possibly too lenient or require special handling in some edge cases.
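To illustrate the `jsony` approach in option 5, here is a minimal sketch with a hypothetical `Exercise` type (not one of configlet's real types):

```nim
import pkg/jsony

# Hypothetical type for illustration only.
type Exercise = object
  slug: string
  difficulty: int

# jsony deserializes from a string directly to an object,
# without building an intermediate JsonNode.
let ex = """{"slug": "two-fer", "difficulty": 1}""".fromJson(Exercise)
doAssert ex.slug == "two-fer"
doAssert ex.difficulty == 1
```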
Summary of the candidates: `std/json`, `std/json` patched, `packedjson`, `eminim`, `json_serialization`, and `jsony`.

For example:

- `std/json` permits a trailing comma, and comments with `//` and `/* */`. This is the main reason that it took a while to tick the boxes for "the file must be valid JSON" in https://github.com/exercism/configlet/issues/249. But we now have our own patched `std/json` with stricter parsing.
- `configlet lint` must exit with a non-zero exit code for a trailing comma, because the Ruby library that parses it later produces an error for a trailing comma.
- … `null`.

See also:
> I'd suggest that `jsony` or `nim-json-serialization` might be best in the long-term. But maybe it's better to stick with the current approach until we've implemented all the linting rules, and refactor it later.

One advantage of the current approach is that it's more low-level, which might better ensure that we're "checking the JSON file itself" rather than "checking that each value is valid when parsed with library X".