When Synless loads a JSON file, the serde_json parser:

- alphabetizes the keys
- if multiple entries have the same key, deletes all but one of them
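
Both behaviors fall out of serde_json's default object representation (a sorted map) and are easy to reproduce. A minimal sketch, assuming serde_json with default features; the input string is made up:

```rust
fn main() {
    // Duplicate "apple" keys, deliberately out of alphabetical order.
    let input = r#"{"zebra": 1, "apple": 2, "apple": 3}"#;
    let value: serde_json::Value = serde_json::from_str(input).unwrap();
    // Round-tripping prints {"apple":3,"zebra":1}: the keys come back
    // alphabetized, and only the last "apple" entry survives.
    println!("{}", serde_json::to_string(&value).unwrap());
}
```

serde_json's optional `preserve_order` feature (which backs objects with an insertion-ordered map) would fix the alphabetization, but duplicate keys would still collapse into a single entry.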
Loading and then saving a file should modify it as little as possible. The deduplication is especially bad because if multiple entries have holes instead of keys, they all end up with the same key, so they are treated as duplicates and all but one is silently deleted.
We'll probably need to stop using serde_json, or provide our own deserialization implementation for serde.
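
As a rough sketch of the second option (assuming serde and serde_json as dependencies): deserializing each object into a `Vec` of key/value pairs through a custom `Visitor` keeps both the source order and any duplicate entries, since `visit_map` sees every entry exactly as the parser reads it. `OrderedObject` is a hypothetical name, and nested objects here still fall back to `serde_json::Value`, so they would need the same treatment recursively.

```rust
use serde::de::{Deserialize, Deserializer, MapAccess, Visitor};
use std::fmt;

// Stores object entries as a list of pairs instead of a map, so
// nothing is reordered or deduplicated.
struct OrderedObject(Vec<(String, serde_json::Value)>);

impl<'de> Deserialize<'de> for OrderedObject {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        struct PairVisitor;

        impl<'de> Visitor<'de> for PairVisitor {
            type Value = OrderedObject;

            fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
                f.write_str("a JSON object")
            }

            // visit_map receives every entry in source order, including
            // duplicates; collecting into a Vec preserves all of them.
            fn visit_map<A: MapAccess<'de>>(self, mut map: A) -> Result<Self::Value, A::Error> {
                let mut entries = Vec::new();
                while let Some((key, value)) = map.next_entry::<String, serde_json::Value>()? {
                    entries.push((key, value));
                }
                Ok(OrderedObject(entries))
            }
        }

        deserializer.deserialize_map(PairVisitor)
    }
}
```

Usage would then look like `let obj: OrderedObject = serde_json::from_str(text)?;`.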