eddyashton closed this pull request 4 years ago.
Merging #461 into master will decrease coverage by 0.21%. The diff coverage is n/a.
```diff
@@            Coverage Diff             @@
##           master     #461      +/-   ##
==========================================
- Coverage    82.7%   82.49%   -0.21%
==========================================
  Files         118      118
  Lines        9519     9519
==========================================
- Hits         7872     7852      -20
- Misses       1647     1667      +20
```
| Flag | Coverage Δ | |
|---|---|---|
| #e2e_BFT | 53.2% <ø> (-0.22%) | :arrow_down: |
| #e2e_CFT | 70.36% <ø> (-1.42%) | :arrow_down: |
| #unit_BFT | 65.09% <ø> (-0.02%) | :arrow_down: |
| #unit_CFT | 71.32% <ø> (+0.01%) | :arrow_up: |
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/node/rpc/memberfrontend.h | 79.54% <0%> (-4.03%) | :arrow_down: |
| src/apps/luageneric/luageneric.cpp | 81.58% <0%> (-3.95%) | :arrow_down: |
| src/luainterp/luajson.h | 92.11% <0%> (-1.75%) | :arrow_down: |
| src/ds/ringbuffer.h | 92.35% <0%> (-0.55%) | :arrow_down: |
The csv files for the 200k transactions are pretty big (about as many lines as our 3rdparty/ directory for each file). Do we need to include them at all or is it easy enough to reproduce them?
I was keeping them around as good exemplars while I fiddled with the plotting, but I think we don't need the csv files now.
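If we do drop them, a rough sketch of how a reader could regenerate the plot from a fresh run's output (the file name and the `idx`/`latency_ms` column names here are hypothetical, and would need to match whatever the perf client actually writes):

```python
# Sketch: plot per-transaction latencies from a perf run's CSV output.
# File name and column names are hypothetical - adjust to the real output.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("perf_run_200k.csv")  # one row per transaction

fig, ax = plt.subplots()
ax.plot(df["idx"], df["latency_ms"], linewidth=0.5)
ax.set_xlabel("transaction index")
ax.set_ylabel("latency (ms)")
fig.savefig("latencies.png", dpi=150)
```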
This is somewhat unrelated, but we should add units to the names of the metrics we publish.
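For instance (purely illustrative names, not what we publish today), putting the unit as a suffix would make the headers self-describing:

```python
# Illustrative unit-suffixed metric names; the metrics themselves are made up.
metrics = {
    "mean_latency_ms": 2.4,        # rather than an ambiguous "mean_latency"
    "throughput_tx_per_s": 41000,  # rather than "throughput"
    "peak_memory_mib": 512,        # rather than "peak_memory"
}
```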
I've moved a script from old tests to samples, cleaned it up, and added a docs page describing our micro-benchmark and e2e perf tests.
Open to rewrites on this: should we walk through extending `perf_client` for user apps? Should we show more comparisons of bad performance (memory blowup, busy machine)?