The main goal of this PR is to de-duplicate data in the chunk store so that fuzzing can continue for a long time without getting OOM-killed.
This required cleaning up the hashing functions so that they behave consistently with node_equals(): there were several serious collision issues that would have broken any de-duplication attempt. I also replaced the "hash set" with a "hash map" so that pre-existing duplicate nodes can be looked up efficiently.
The commits after the "de-duplicate" commit are optional follow-ups: they use the chunk store more effectively and remove code that is no longer needed after the earlier changes.