miloyip / nativejson-benchmark

C/C++ JSON parser/generator benchmark

Tests for error messages #23

Open · emilk opened this issue 8 years ago

emilk commented 8 years ago

One thing I've noticed is that a lot of JSON parsers do not provide good error messages when you have a parse error or a usage error (e.g. accessing a non-existent element in an object). For human-edited JSON files this is very important.

To remedy this I've been working on a JSON parser/writer whose main goal is great error messages. I'm thinking about making a PR for nativejson-benchmark in which I add some sort of tests for user friendliness. These tests would be a third category (besides benchmarks and conformance). Example tests:

The library-facing interface could be something along the lines of "parse this or give an error string" and "return the value at this path or give an error string" (where the path is a list of object keys and/or array indices identifying a value in a config), as sketched below.
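
For concreteness, here is a minimal C++ sketch of what such an adapter interface might look like; every name in it (`ErrorMessageTest`, `PathElement`, etc.) is hypothetical and not part of nativejson-benchmark:

```cpp
#include <cstddef>
#include <string>
#include <variant>
#include <vector>

// One step of a path into a document: an object key or an array index.
using PathElement = std::variant<std::string, std::size_t>;

// Hypothetical per-library adapter; all names are illustrative only.
class ErrorMessageTest {
public:
    virtual ~ErrorMessageTest() = default;

    // Parse `json`; on failure, store the library's error message
    // (ideally with line/column) in `error` and return false.
    virtual bool Parse(const std::string& json, std::string& error) = 0;

    // Fetch the value at `path` in the last parsed document; on failure
    // (missing key, index out of range), store a message in `error`.
    virtual bool GetValueAt(const std::vector<PathElement>& path,
                            std::string& error) = 0;
};
```

A test harness could then feed each adapter the same malformed inputs and paths, and compare the quality of the returned error strings.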

My question is: is there any interest in such tests (from anyone but me, that is)? Or, more to the point: if I made such a PR, would it be accepted?

I realize this is outside the main focus of nativejson-benchmark, but it's such a great benchmark with support for so many libraries that I feel it is the natural place to add such a test for user friendliness.

miloyip commented 8 years ago

I think the most difficult part is defining "user friendliness". Performance and conformance are currently reasonably well-defined, although some conformance tests, such as roundtrip, are not required by the standard.

Returning the line/column of the invalid location in the JSON seems possible, but it may be difficult to define what the "invalid location" is, since parsers follow different conventions. For example, for a string without a terminating `"`, one parser may report the error at the end of the input, while another may report it at the start of that string.
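
To illustrate the ambiguity (with invented messages, not quotes from any real parser), both conventions are defensible for the same input:

```cpp
// Malformed input: the string opened at column 10 is never terminated.
const char* json = "{\"name\": \"unterminated }";

// Convention A: report at the end of the input, e.g.
//   "line 1, column 25: unexpected end of input inside string"
// Convention B: report where the string began, e.g.
//   "line 1, column 10: string opened here is never closed"
```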

Some parsers do not throw exceptions but use other mechanisms to report runtime errors.
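
A harness could still normalize both conventions behind the single "parse or give an error string" shape; a rough sketch, with both libraries stubbed out as hypothetical stand-ins:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical "throwing" library: parse() throws on bad input.
namespace throwing_lib {
    inline void parse(const std::string& json) {
        if (json.empty()) throw std::runtime_error("unexpected end of input");
    }
}

// Hypothetical "error code" library: parse() returns a result struct.
namespace errcode_lib {
    struct Result { bool ok; std::string message; };
    inline Result parse(const std::string& json) {
        if (json.empty()) return {false, "error 1: empty document"};
        return {true, ""};
    }
}

// Both conventions reduce to the same "parse or give an error string" form.
bool ParseThrowingLib(const std::string& json, std::string& error) {
    try {
        throwing_lib::parse(json);
        return true;
    } catch (const std::exception& e) {
        error = e.what();          // convert the exception to an error string
        return false;
    }
}

bool ParseErrorCodeLib(const std::string& json, std::string& error) {
    errcode_lib::Result r = errcode_lib::parse(json);
    if (!r.ok) error = r.message;  // translate the code/message pair
    return r.ok;
}
```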

Actually, I have thought of doing something similar, but in a broader sense: a feature comparison table would be useful for people evaluating libraries, and user friendliness could be one category in that table. However, it would have to be compiled manually, which may introduce bias and a lot of arguments.