venkatsvpr opened 2 years ago
Hi @venkatsvpr - thanks for the report. The discrepancies are large enough to warrant some investigation on our end.
Are you able to provide some specific instructions for how you ran your benchmarks? Type of machine, for how long, type of block device, operating system / configuration, etc. Basically enough for us to observe the same discrepancy.
Hi @nicktrav,
I am running the benchmarks as follows:
./testbench bench ycsb ./dbfiles/ --engine <engine_name> --workload <workload_type>
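For example, a concrete run for pebble on workload A (these values are just the template above filled in to match the results further down) looks like:

./testbench bench ycsb ./dbfiles/ --engine pebble --workload A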
The commands are here.
I am running this in an ubuntu:18.04 container on WSL. My laptop has an SSD.
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               140
Model name:          11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
Stepping:            1
CPU MHz:             2995.212
BogoMIPS:            5990.42
Virtualization:      VT-x
Hypervisor vendor:   Microsoft
Virtualization type: full
L1d cache:           48K
L1i cache:           32K
L2 cache:            1280K
L3 cache:            12288K
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect flush_l1d arch_capabilities
root@e15e7b03dcbe:/stuff/GitDev/pebble/cmd/pebble# lsmem
RANGE                                  SIZE  STATE REMOVABLE  BLOCK
0x0000000000000000-0x00000000f7ffffff  3.9G online       yes   0-30
0x0000000100000000-0x00000005ffffffff   20G online       yes 32-191

Memory block size:       128M
Total online memory:    23.9G
Total offline memory:      0B

root@e15e7b03dcbe:/stuff/GitDev/pebble/cmd/pebble# free -m
              total        used        free      shared  buff/cache   available
Mem:          23893        2939       18089         367        2864       20200
Swap:          6144           0        6144
root@e15e7b03dcbe:/stuff/GitDev/pebble/cmd/pebble#
Happy to provide more info. Thanks!
Thanks @venkatsvpr - we'll take a look.
following
Hi all -- I was wondering if there are any updates on this benchmark? Thank you :)
No updates unfortunately. This got deprioritized.
Are you looking for anything in particular?
The benchmark used here does not perform synchronous writes for badger [1-4], whereas the pebble test respects the CLI flag for disabling the WAL [5-9] (and all WAL writes in pebble are synchronous when a write batch is committed). Changing the benchmark so that both engines use the same WAL setting would be illustrative; a minimal sketch of matching configurations follows the references below.
[1] https://github.com/cockroachdb/pebble/compare/master...venkatsvpr:pebble:master#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R17
[2] https://github.com/outcaste-io/badger/tree/v3.2202.0
[3] https://github.com/outcaste-io/badger/blob/v3.2202.0/options.go#L135-L158
[4] https://github.com/cockroachdb/pebble/compare/master...venkatsvpr:pebble:master#diff-056c0493a5a390469c794bee4fe9075f5f800246e9855c44990e0aa24faaa68aR26-R30
[5] https://github.com/cockroachdb/pebble/blob/9de3a89ff2bdec45d85b7f969b9ca4d17fa7e474/cmd/pebble/db.go#L53-L59
[6] https://github.com/cockroachdb/pebble/blob/4a3adc32512af946b569920229b6597ae4899f1a/commit.go#L237-L240
[7] https://github.com/cockroachdb/pebble/blob/4a3adc32512af946b569920229b6597ae4899f1a/commit.go#L347-L364
[8] https://github.com/cockroachdb/pebble/blob/5ed983e594da1f186dfac5a082648434d45c2fd3/open.go#L121-L126
[9] https://github.com/cockroachdb/pebble/blob/181258e4edbfd38ff84c9a408f95a0ffc6804bf3/db.go#L819-L841
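To make that concrete, below is a minimal Go sketch of opening the two engines with matching durability settings. It is not the forked benchmark's actual wiring: the openBoth helper, the directory layout, and the choice of the upstream dgraph-io badger module (rather than the outcaste-io fork linked above, which exposes the same SyncWrites option per [3]) are my own assumptions; the pebble calls (Options.DisableWAL, pebble.Sync) are the public API referenced in [5]-[9].

```go
package main

import (
	"log"

	"github.com/cockroachdb/pebble"
	badger "github.com/dgraph-io/badger/v3"
)

// openBoth is a hypothetical helper that opens badger and pebble so that a
// committed write carries the same durability guarantee on each engine.
func openBoth(dir string, syncWAL bool) (*badger.DB, *pebble.DB, error) {
	// Badger: SyncWrites defaults to false, i.e. commits are acknowledged
	// before the write reaches stable storage -- the asymmetry noted above.
	bdb, err := badger.Open(badger.DefaultOptions(dir + "/badger").WithSyncWrites(syncWAL))
	if err != nil {
		return nil, nil, err
	}

	// Pebble: with the WAL enabled, per-commit durability is chosen by the
	// WriteOptions passed to Set/Apply (pebble.Sync vs pebble.NoSync);
	// DisableWAL turns the WAL off entirely, which is what the benchmark's
	// flag for disabling the WAL maps to.
	pdb, err := pebble.Open(dir+"/pebble", &pebble.Options{DisableWAL: !syncWAL})
	if err != nil {
		bdb.Close()
		return nil, nil, err
	}
	return bdb, pdb, nil
}

func main() {
	bdb, pdb, err := openBoth("./dbfiles", true /* syncWAL */)
	if err != nil {
		log.Fatal(err)
	}
	defer bdb.Close()
	defer pdb.Close()

	// With syncWAL=true, both writes below wait for a WAL sync before
	// returning, so the two engines are measured under the same contract.
	if err := bdb.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("key"), []byte("value"))
	}); err != nil {
		log.Fatal(err)
	}
	if err := pdb.Set([]byte("key"), []byte("value"), pebble.Sync); err != nil {
		log.Fatal(err)
	}
}
```

The point of the sketch is only that syncWAL should be set the same way for both engines before comparing throughput; the actual YCSB loop lives in the forked benchmark linked above.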
What is surprising, then, is that the badger[3] and pebble[8] numbers are so close ... @venkatsvpr, can you run your benchmarks again, making sure the WAL settings are identical, as advised by Sean?
Thanks @sean - sure, let me give it a try and get back.
I am interested in selecting a key-value DB and ran this comparison to understand the performance differences. I didn't expect such a big difference between pebble and badger. Am I missing something?
Link to the repo where I ran the benchmarks: https://github.com/venkatsvpr/pebble
Badger (metrics are not wired up completely)
Engine:badger Benchmarkycsb/A/values=1000  160284   16028.1 ops/sec   0 read  0 write  0.00 r-amp  0.00 w-amp
Engine:badger Benchmarkycsb/B/values=1000  707112   70709.1 ops/sec   0 read  0 write  0.00 r-amp  0.00 w-amp
Engine:badger Benchmarkycsb/C/values=1000  3447390  344717.4 ops/sec  0 read  0 write  0.00 r-amp  0.00 w-amp
Engine:badger Benchmarkycsb/D/values=1000  1916511  191648.8 ops/sec  0 read  0 write  0.00 r-amp  0.00 w-amp
Pebble
Engine:pebble Benchmarkycsb/A/values=1000  6820     681.9 ops/sec     0 read  13929144 write  6.45 r-amp  1.00 w-amp
Engine:pebble Benchmarkycsb/B/values=1000  66593    6658.7 ops/sec    0 read  13849539 write  6.42 r-amp  1.00 w-amp
Engine:pebble Benchmarkycsb/C/values=1000  3820737  382043.9 ops/sec  0 read  10377666 write  6.00 r-amp  1.00 w-amp
Engine:pebble Benchmarkycsb/D/values=1000  66757    6675.0 ops/sec    0 read  13926932 write  6.45 r-amp  1.00 w-amp
Thanks!
Jira issue: PEBBLE-132