Closed greenpdx closed 2 days ago
I'm afraid in that use case you're almost certainly benchmarking your disk read speed, not how fast your CPU can hash the data. And in that case (reading from disk, not cache), `update_mmap_rayon` is probably hurting you more than helping, because it issues reads in a "random" order instead of straight front-to-back through the file.
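To isolate disk throughput from the hashing strategy, the file can be read front-to-back with a plain buffered copy instead of a parallel memory map. A minimal sketch (the helper name `stream_file` is hypothetical; it takes any `io::Write` sink, and `blake3::Hasher` implements `io::Write`, so it can be passed directly):

```rust
use std::{fs::File, io, path::Path};

// Stream a file sequentially through any `Write` sink. With a buffered
// reader, reads are issued strictly front-to-back, so a benchmark built
// on this measures sequential disk throughput rather than the
// out-of-order page faults of a parallel mmap.
fn stream_file<W: io::Write>(pth: &Path, sink: &mut W) -> io::Result<u64> {
    io::copy(&mut io::BufReader::new(File::open(pth)?), sink)
}
```

Usage would look like `stream_file(pth, &mut blake3::Hasher::new())`, then `finalize()` on the hasher as usual.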
I just did a BLAKE3 hash of over 16,000 movie files and recorded them in a SQLite database. size: 2,270,677,986,839 bytes, time: 54,555.38 s — about 2.3 TB in roughly 15 hours, i.e. ≈ 41.6 MB/s.
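As a quick sanity check of those numbers (decimal megabytes, 1 MB = 10^6 bytes, assumed to match the 41.6 figure; `throughput_mb_per_s` is just an illustrative helper):

```rust
// Throughput from the run above: total bytes divided by elapsed seconds,
// expressed in decimal MB/s.
fn throughput_mb_per_s(bytes: u64, secs: f64) -> f64 {
    bytes as f64 / 1e6 / secs
}
```

Plugging in 2,270,677,986,839 bytes over 54,555.38 s gives about 41.6 MB/s, and 54,555 s / 3600 ≈ 15.2 hours — consistent with disk-bound, not CPU-bound, hashing.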
```rust
async fn file_hash(pth: &Path) -> Result<String, Box<dyn std::error::Error>> {
    let mut hasher = Hasher::new();
    // Memory-map the file and hash it in parallel (requires the
    // blake3 crate's "mmap" and "rayon" features).
    hasher.update_mmap_rayon(pth)?;
    Ok(hasher.finalize().to_hex().to_string())
}
```