FuzzrNet / Fuzzr

P2P platform for publishing content, self-hosting, decentralized curation, and more.
https://fuzzr.net
The Unlicense

perf: basic criterion benchmarks for store and load of text #77

Closed · goller closed this 3 years ago

goller commented 3 years ago

I had to move src/main.rs to src/bin/fuzzr.rs, because benches cannot import the library of a crate that only defines a binary.
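For context, the benches follow the usual criterion throughput-group shape, roughly like the sketch below. This is only an illustration: the black_box placeholder stands in for the real fuzzr store/load calls, and the file/function names are made up here; only the criterion wiring and the payload sizes from the report are real.

// benches/text.rs (sketch only) -- run with `cargo bench`.
// The placeholder inside b.iter() is where the actual fuzzr load/store call would go.
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};

fn load_text_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("load_text_throughput");
    for size in [1024usize, 16_384, 262_144, 1_024_000] {
        let text = "x".repeat(size);
        // Report results as bytes/sec so criterion prints MiB/s / GiB/s, as in the report below.
        group.throughput(Throughput::Bytes(size as u64));
        group.bench_with_input(
            BenchmarkId::from_parameter(format!("{}_bytes", size)),
            &text,
            |b, text| {
                // Placeholder: the real benchmark calls the library's load path here.
                b.iter(|| black_box(text.len()))
            },
        );
    }
    group.finish();
}

criterion_group!(benches, load_text_throughput);
criterion_main!(benches);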

Here is an example report:

Benchmarking load_text_throughput/1024_bytes
Benchmarking load_text_throughput/1024_bytes: Warming up for 3.0000 s
Benchmarking load_text_throughput/1024_bytes: Collecting 100 samples in estimated 5.0522 s (252k iterations)
Benchmarking load_text_throughput/1024_bytes: Analyzing
load_text_throughput/1024_bytes
                        time:   [19.818 us 20.226 us 20.698 us]
                        thrpt:  [47.181 MiB/s 48.282 MiB/s 49.277 MiB/s]
                 change:
                        time:   [-4.0016% -1.3089% +1.4003%] (p = 0.33 > 0.05)
                        thrpt:  [-1.3809% +1.3262% +4.1684%]
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe
Benchmarking load_text_throughput/16384_bytes
Benchmarking load_text_throughput/16384_bytes: Warming up for 3.0000 s
Benchmarking load_text_throughput/16384_bytes: Collecting 100 samples in estimated 5.0427 s (237k iterations)
Benchmarking load_text_throughput/16384_bytes: Analyzing
load_text_throughput/16384_bytes
                        time:   [21.155 us 21.456 us 21.830 us]
                        thrpt:  [715.76 MiB/s 728.22 MiB/s 738.58 MiB/s]
                 change:
                        time:   [-1.4959% +0.2366% +1.8887%] (p = 0.78 > 0.05)
                        thrpt:  [-1.8537% -0.2360% +1.5186%]
                        No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
  14 (14.00%) high severe
Benchmarking load_text_throughput/262144_bytes
Benchmarking load_text_throughput/262144_bytes: Warming up for 3.0000 s
Benchmarking load_text_throughput/262144_bytes: Collecting 100 samples in estimated 5.2214 s (91k iterations)
Benchmarking load_text_throughput/262144_bytes: Analyzing
load_text_throughput/262144_bytes
                        time:   [56.666 us 57.265 us 58.031 us]
                        thrpt:  [4.2071 GiB/s 4.2633 GiB/s 4.3085 GiB/s]
                 change:
                        time:   [-1.7947% +0.5167% +3.1932%] (p = 0.71 > 0.05)
                        thrpt:  [-3.0944% -0.5141% +1.8275%]
                        No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
  2 (2.00%) high mild
  9 (9.00%) high severe
Benchmarking load_text_throughput/1024000_bytes
Benchmarking load_text_throughput/1024000_bytes: Warming up for 3.0000 s
Benchmarking load_text_throughput/1024000_bytes: Collecting 100 samples in estimated 5.1192 s (25k iterations)
Benchmarking load_text_throughput/1024000_bytes: Analyzing
load_text_throughput/1024000_bytes
                        time:   [188.78 us 191.93 us 195.37 us]
                        thrpt:  [4.8815 GiB/s 4.9689 GiB/s 5.0519 GiB/s]
                 change:
                        time:   [-2.9584% +0.4243% +4.3536%] (p = 0.82 > 0.05)
                        thrpt:  [-4.1720% -0.4225% +3.0486%]
                        No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
  4 (4.00%) high mild
  3 (3.00%) high severe

Benchmarking store_text_throughput/1024_bytes
Benchmarking store_text_throughput/1024_bytes: Warming up for 3.0000 s
Benchmarking store_text_throughput/1024_bytes: Collecting 100 samples in estimated 5.2461 s (66k iterations)
Benchmarking store_text_throughput/1024_bytes: Analyzing
store_text_throughput/1024_bytes
                        time:   [77.428 us 78.358 us 79.522 us]
                        thrpt:  [12.280 MiB/s 12.463 MiB/s 12.613 MiB/s]
                 change:
                        time:   [-3.7098% -1.7624% +0.2276%] (p = 0.08 > 0.05)
                        thrpt:  [-0.2271% +1.7941% +3.8528%]
                        No change in performance detected.
Found 13 outliers among 100 measurements (13.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  11 (11.00%) high severe
Benchmarking store_text_throughput/16384_bytes
Benchmarking store_text_throughput/16384_bytes: Warming up for 3.0000 s
Benchmarking store_text_throughput/16384_bytes: Collecting 100 samples in estimated 5.4411 s (61k iterations)
Benchmarking store_text_throughput/16384_bytes: Analyzing
store_text_throughput/16384_bytes
                        time:   [92.245 us 94.675 us 97.175 us]
                        thrpt:  [160.79 MiB/s 165.04 MiB/s 169.39 MiB/s]
                 change:
                        time:   [+2.0510% +5.4291% +9.5546%] (p = 0.00 < 0.05)
                        thrpt:  [-8.7213% -5.1495% -2.0098%]
                        Performance has regressed.
Found 23 outliers among 100 measurements (23.00%)
  10 (10.00%) high mild
  13 (13.00%) high severe
Benchmarking store_text_throughput/262144_bytes
Benchmarking store_text_throughput/262144_bytes: Warming up for 3.0000 s
Benchmarking store_text_throughput/262144_bytes: Collecting 100 samples in estimated 5.4979 s (20k iterations)
Benchmarking store_text_throughput/262144_bytes: Analyzing
store_text_throughput/262144_bytes
                        time:   [274.98 us 281.16 us 288.82 us]
                        thrpt:  [865.58 MiB/s 889.17 MiB/s 909.15 MiB/s]
                 change:
                        time:   [+3.9246% +6.7403% +9.6361%] (p = 0.00 < 0.05)
                        thrpt:  [-8.7891% -6.3147% -3.7764%]
                        Performance has regressed.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild
Benchmarking store_text_throughput/1024000_bytes
Benchmarking store_text_throughput/1024000_bytes: Warming up for 3.0000 s
Benchmarking store_text_throughput/1024000_bytes: Collecting 100 samples in estimated 8.9799 s (10k iterations)
Benchmarking store_text_throughput/1024000_bytes: Analyzing
store_text_throughput/1024000_bytes
                        time:   [831.94 us 853.01 us 877.59 us]
                        thrpt:  [1.0867 GiB/s 1.1180 GiB/s 1.1463 GiB/s]
                 change:
                        time:   [+2.4954% +6.3998% +10.654%] (p = 0.00 < 0.05)
                        thrpt:  [-9.6281% -6.0148% -2.4347%]
                        Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe

I have yet to add image benchmarks; I'm not sure how to generate test images in a variety of sizes.
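One rough idea, assuming we pulled in the image crate as a dev-dependency (it isn't one today, and the fixture names and dimensions below are arbitrary), would be to synthesize PNG fixtures at a few dimensions and benchmark against their encoded sizes:

// Sketch only: generate PNG fixtures of increasing size for an image benchmark.
use image::{Rgb, RgbImage};

fn main() -> Result<(), image::ImageError> {
    for dim in [64u32, 256, 1024] {
        // Fill with a simple gradient so the PNGs are not trivially compressible.
        let img = RgbImage::from_fn(dim, dim, |x, y| {
            Rgb([(x % 256) as u8, (y % 256) as u8, ((x + y) % 256) as u8])
        });
        img.save(format!("bench_fixture_{0}x{0}.png", dim))?;
    }
    Ok(())
}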

Signed-off-by: Chris Goller <goller@gmail.com>

cryptoquick commented 3 years ago

Great work also! I'm fine with moving the binary file, and I'll try to make sure everything works well when I do a release. I might have to add a [[bin]] section to the Cargo.toml.
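For what it's worth, Cargo normally auto-discovers src/bin/fuzzr.rs as a binary, but if an explicit section turns out to be needed it would look roughly like the snippet below; criterion benches also need harness = false. Section names and the bench file name here are guesses, not the actual manifest:

# Cargo.toml (sketch)
[[bin]]
name = "fuzzr"
path = "src/bin/fuzzr.rs"

# criterion requires the default libtest harness to be disabled for benches
[[bench]]
name = "text"          # hypothetical bench target name
harness = false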

Another issue with pulling your changes in from a fork, as much as I like that model, is that GitHub doesn't run Actions on them. Can you push both of your changes to the repo itself? Also, maybe join our Discord. I should put a link to our Matrix server in the README, too.