Here we use another encoding, which is faster according to benchmarks and profiling with a 4-node set.
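The patch itself is not quoted here, so the following is only a minimal sketch of one way to build a NUL-terminated byte string in a single pass via a ByteString Builder; the module and function names (NullTerminated, nullTerminatedString) are illustrative and need not match the code in this PR.

module NullTerminated (nullTerminatedString) where

import qualified Data.ByteString as BS
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy as BL
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- Encode a Text value as a NUL-terminated strict ByteString in one pass,
-- avoiding the extra copy that appending the terminator afterwards would cost.
-- Illustrative only; the real helper may take the library name and type name
-- as separate arguments.
nullTerminatedString :: T.Text -> BS.ByteString
nullTerminatedString t =
  BL.toStrict . B.toLazyByteString $
    B.byteString (TE.encodeUtf8 t) <> B.word8 0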
Benchmark results before the change to the improved function:
benchmarking Hash/helpers/nullTerminatedString/short libname / short typename
time                 149.0 μs   (148.4 μs .. 149.8 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 148.9 μs   (148.6 μs .. 149.2 μs)
std dev              1.114 μs   (948.6 ns .. 1.390 μs)

benchmarking Hash/helpers/nullTerminatedString/long libname / short typename
time                 199.8 μs   (199.2 μs .. 200.3 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 199.1 μs   (198.7 μs .. 199.4 μs)
std dev              1.281 μs   (1.057 μs .. 1.616 μs)

benchmarking Hash/helpers/nullTerminatedString/short libname / long typename
time                 206.9 μs   (206.1 μs .. 207.7 μs)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 207.4 μs   (206.0 μs .. 212.8 μs)
std dev              7.586 μs   (1.083 μs .. 18.07 μs)
variance introduced by outliers: 34% (moderately inflated)

benchmarking Hash/helpers/nullTerminatedString/long libname / long typename
time                 229.1 μs   (228.6 μs .. 229.7 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 229.3 μs   (229.0 μs .. 229.7 μs)
std dev              1.195 μs   (875.4 ns .. 1.749 μs)
Benchmark results after the change to the improved function:
benchmarking Hash/helpers/nullTerminatedString/short libname / short typename
time                 124.9 μs   (124.7 μs .. 125.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 125.4 μs   (125.2 μs .. 125.7 μs)
std dev              854.2 ns   (709.5 ns .. 1.028 μs)

benchmarking Hash/helpers/nullTerminatedString/long libname / short typename
time                 133.1 μs   (132.9 μs .. 133.3 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 133.0 μs   (132.8 μs .. 133.2 μs)
std dev              622.6 ns   (530.2 ns .. 742.2 ns)

benchmarking Hash/helpers/nullTerminatedString/short libname / long typename
time                 133.7 μs   (133.4 μs .. 134.1 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 134.1 μs   (133.9 μs .. 134.2 μs)
std dev              604.5 ns   (498.9 ns .. 735.3 ns)

benchmarking Hash/helpers/nullTerminatedString/long libname / long typename
time                 169.8 μs   (169.3 μs .. 170.5 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 169.7 μs   (169.5 μs .. 170.0 μs)
std dev              786.5 ns   (461.0 ns .. 1.213 μs)