Now with a proper benchmark:

before:
test read_metadata ... bench: 11,988,380 ns/iter (+/- 1,411,339)
after:
test read_metadata ... bench: 9,600,650 ns/iter (+/- 804,604)

At this point most of the cost is memory allocations and hashmap inserts. To do better you'd need a two-pass parser (one pass to collect all the file sizes so you can pre-allocate large buffers for the variable-length data, and another to fill them), and that's more complexity than it's worth to me.
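For what it's worth, the two-pass idea could look roughly like this. This is only a sketch: it assumes a hypothetical length-prefixed record format (a little-endian `u32` length followed by that many payload bytes), not the actual metadata layout. The first pass only reads the length prefixes to size one shared buffer; the second pass copies payloads into it and records `(offset, len)` spans, so there's a single large allocation instead of one per record.

```rust
/// Two-pass parse of length-prefixed records: pass 1 sums payload sizes,
/// pass 2 fills a single pre-allocated buffer and records each span.
/// (Hypothetical record format for illustration, not the real one.)
fn parse_two_pass(input: &[u8]) -> (Vec<u8>, Vec<(usize, usize)>) {
    // Pass 1: walk the records, reading only each 4-byte length prefix.
    let mut total = 0usize;
    let mut pos = 0usize;
    while pos + 4 <= input.len() {
        let len = u32::from_le_bytes(input[pos..pos + 4].try_into().unwrap()) as usize;
        pos += 4 + len;
        total += len;
    }

    // Pass 2: one big allocation up front, then copy payloads into it.
    let mut buf: Vec<u8> = Vec::with_capacity(total);
    let mut spans: Vec<(usize, usize)> = Vec::new();
    let mut pos = 0usize;
    while pos + 4 <= input.len() {
        let len = u32::from_le_bytes(input[pos..pos + 4].try_into().unwrap()) as usize;
        let start = buf.len();
        buf.extend_from_slice(&input[pos + 4..pos + 4 + len]);
        spans.push((start, len));
        pos += 4 + len;
    }
    (buf, spans)
}
```

The spans index into `buf` instead of owning their own `Vec`s, which is where the allocation savings would come from; whether the second scan pays for itself is exactly the trade-off questioned above.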