It is possible to trigger an out-of-bounds read in compare_keys while
dumping JSON with JSON_SORT_KEYS, due to the use of a signed integer for
key lengths.
If a key is longer than 2 GB, the len field of key_len wraps around to a
negative value. The code picks the smaller of the two lengths for the
memory comparison, and since a negative int converts to a huge size_t,
the subsequent memcmp call reads far past the end of the buffer.
Proof of Concept:
Create a JSON file with two keys, one of them longer than 2 GB:
echo -n '{"' > header.json
dd if=/dev/zero bs=1024 count=2097153 | tr '\0' 'a' > poc.json
dd if=header.json of=poc.json conv=notrunc
echo -n '":"a","a":""}' >> poc.json
Without this patch, dumping the resulting file with JSON_SORT_KEYS
triggers an out-of-bounds read.
Signed-off-by: Tobias Stoeckmann <tobias@stoeckmann.org>