-
Parsing fails when the JSON contains any comments.
Error output:
"maybe a (non-standard) comment? (not recognized as one since Feature 'ALLOW_COMMENTS' not enabled for parser)"
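For context, strict JSON (RFC 8259) has no comment syntax, so a conforming parser rejects such input unless a non-standard option (like Jackson's `ALLOW_COMMENTS`) is enabled. A minimal sketch of the same rejection using Python's standard `json` module:

```python
import json

# Strict JSON has no comment syntax; a conforming parser must reject
# input like this unless a vendor extension is explicitly enabled.
text = '{ // a comment\n  "a": 1 }'

try:
    json.loads(text)
except json.JSONDecodeError as e:
    print("rejected:", e.msg)
```
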
-
There isn't a fixed spot in tests right now which indicates which portion, section, or text of the specification it covers. The suite's main purpose is of course to faithfully represent the specificat…
-
[This sanity check](https://github.com/dmeranda/demjson/blob/5bc65974e7141746acc88c581f5d2dfb8ea14064/demjson.py#L4946-L4962) keeps demjson from correctly parsing `{あ:2}`. When I simply disable it, th…
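Note that `{あ:2}` is only parseable in a lenient mode anyway: strict JSON requires object keys to be quoted strings. A quick illustration with Python's standard `json` module (not demjson):

```python
import json

# Strict JSON requires quoted string keys, so only the quoted form parses;
# accepting the unquoted key {あ:2} is a lenient-mode extension.
print(json.loads('{"あ": 2}'))  # {'あ': 2}

try:
    json.loads('{あ: 2}')
except json.JSONDecodeError:
    print("unquoted key rejected by a strict parser")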
-
### Version Information
v2.37.0
OSS
### What is the current behaviour?
JSONB columns in Postgres with big integer values are being rounded, e.g. `750727362669608961` becomes `750727362669609000` on …
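This rounding pattern is consistent with the value passing through an IEEE-754 double somewhere in the pipeline (an assumption; e.g. a JavaScript layer, whose shortest round-trip printing would display the resulting double as `750727362669609000`). The precision loss itself can be reproduced in Python:

```python
# 750727362669608961 needs 60 bits of precision, but an IEEE-754 double
# has only a 53-bit significand, so a round trip through float loses the
# low bits.
original = 750727362669608961
via_double = int(float(original))

print(via_double)               # 750727362669608960
print(via_double == original)   # False
```
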
-
The test `test_parsing/n_string_unicode_CapitalU.json` expects `"\UA66D"` to fail to parse.
But I am confused by this, because the spec says that any character may be escaped, hence it's…
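For what it's worth, RFC 8259 only permits the escapes `\" \\ \/ \b \f \n \r \t` and `\u` followed by exactly four hex digits; a capital `\U` is not in the grammar, which is why the suite marks it as a must-fail case. Python's standard parser behaves the same way:

```python
import json

# \uXXXX (lowercase u, four hex digits) is the only Unicode escape in
# RFC 8259; \U is not a valid escape sequence.
print(json.loads('"\\uA66D"'))  # 'ꙭ' (U+A66D)

try:
    json.loads('"\\UA66D"')
except json.JSONDecodeError:
    print("\\U escape rejected")
```
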
-
The validator should ignore the UTF-8 BOM at the start of JSON files to avoid crashing during parsing. It looks like this affects any JSON file, but the specific case found on OpenNeuro was for dataset_descri…
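One way to tolerate the BOM, assuming the validator reads files in Python, is to decode with the `utf-8-sig` codec, which strips a leading BOM if present (a sketch, not the validator's actual code path):

```python
import json

# A JSON file saved with a UTF-8 BOM (common on Windows editors).
raw = b'\xef\xbb\xbf{"Name": "example"}'

# Plain utf-8 decoding keeps the BOM as U+FEFF and json.loads then fails;
# utf-8-sig strips a leading BOM if present and is a no-op otherwise.
text = raw.decode("utf-8-sig")
print(json.loads(text))  # {'Name': 'example'}
```
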
-
Is this expected behavior when deserializing JSON to a Dictionary?
Other serializers, such as JSON.NET, deserialize JSON integer values to long by default (if the target variable's type is object).
```
[T…
-
I tested Yojson on the test cases of https://github.com/nst/JSONTestSuite:
```
CRASH n_array_extra_comma.json
CRASH n_array_incomplete_invalid_value.json
CRASH n_array_number_and_comma.json
CRA…
-
**Describe the bug**
Sending an HTTP request with `Content-Type: application/json;charset=UTF-8` doesn't encode the body as UTF-8.
**To Reproduce**
Steps to reproduce the behavior:
1. Send a req…
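A workaround sketch (in Python rather than the client in question, and with a hypothetical payload) is to serialize with non-ASCII characters preserved and encode the body to UTF-8 bytes explicitly, instead of relying on the client to honor the `charset` parameter:

```python
import json

# Keep non-ASCII characters in the serialized text (ensure_ascii=False),
# then encode the body to UTF-8 bytes explicitly before sending.
payload = {"name": "café"}
body = json.dumps(payload, ensure_ascii=False).encode("utf-8")
print(body)  # b'{"name": "caf\xc3\xa9"}'
```
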
-
#### Bug Report Checklist
- [x] Have you provided a full/minimal spec to reproduce the issue?
- [x] Have you validated the input using an OpenAPI validator ([example](https://apidevtools.org/swagg…