When we load a JSON file, the keys come back alphabetized and duplicate keys are silently dropped. The deduplication is especially bad because if multiple entries have holes instead of keys, they all collapse into a single entry.
I don't think serde lets us configure that behavior, so we may need to write our own Deserialize implementation for serde, or switch to a different parser.
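For context: if the loader goes through `serde_json::Value`, the alphabetizing comes from `serde_json::Map` being backed by a sorted `BTreeMap` by default (the crate's `preserve_order` feature keeps file order, but still collapses duplicate keys, since it is still a map). Below is a minimal sketch of the "own deserialization implementation" route: a custom `Deserialize` with a map visitor that pushes every `(key, value)` pair into a `Vec` instead of inserting into a map, so order and duplicates both survive. `RawEntries` is a hypothetical name, and the assumption here is that "holes" show up as repeated placeholder keys (e.g. `""`); adjust to however they actually appear in our files.

```rust
use std::fmt;

use serde::de::{Deserialize, Deserializer, MapAccess, Visitor};
use serde_json::Value;

/// Hypothetical wrapper that keeps every object entry in file order,
/// including entries whose keys collide (e.g. several "hole" keys).
#[derive(Debug)]
struct RawEntries(Vec<(String, Value)>);

impl<'de> Deserialize<'de> for RawEntries {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        struct EntriesVisitor;

        impl<'de> Visitor<'de> for EntriesVisitor {
            type Value = RawEntries;

            fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
                f.write_str("a JSON object")
            }

            // serde_json streams object entries in file order and does not
            // dedupe at the parser level; pushing into a Vec (rather than
            // inserting into a map) is what preserves duplicates here.
            fn visit_map<A>(self, mut access: A) -> Result<Self::Value, A::Error>
            where
                A: MapAccess<'de>,
            {
                let mut entries = Vec::with_capacity(access.size_hint().unwrap_or(0));
                while let Some((key, value)) = access.next_entry::<String, Value>()? {
                    entries.push((key, value));
                }
                Ok(RawEntries(entries))
            }
        }

        deserializer.deserialize_map(EntriesVisitor)
    }
}

fn main() -> Result<(), serde_json::Error> {
    // Two entries share the placeholder key "": a plain Map would keep only one.
    let text = r#"{ "a": 1, "": 2, "": 3 }"#;
    let entries: RawEntries = serde_json::from_str(text)?;
    println!("{:?}", entries.0); // [("a", Number(1)), ("", Number(2)), ("", Number(3))]
    Ok(())
}
```

If we want actual typed records rather than `Value`s, the same visitor pattern works; only the `next_entry` value type changes. Whether this beats switching parsers mostly depends on how much of the rest of the pipeline is already written against serde.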