mbutrovich closed this pull request 6 years ago
A demonstration of the difference in precision for latency: (timestamps) is the new method, calculated from the transaction timestamp, while (stats) is the old method using a Timer.
transaction latency (timestamps): 52.366000   transaction latency (stats): 52.379732
transaction latency (timestamps): 4.082000    transaction latency (stats): 4.086408
transaction latency (timestamps): 8.355000    transaction latency (stats): 8.360423
transaction latency (timestamps): 9.970000    transaction latency (stats): 9.973952
transaction latency (timestamps): 12.874000   transaction latency (stats): 12.879112
transaction latency (timestamps): 39.244000   transaction latency (stats): 39.249294
transaction latency (timestamps): 64.269000   transaction latency (stats): 64.277862
transaction latency (timestamps): 0.812000    transaction latency (stats): 0.816374
transaction latency (timestamps): 0.804000    transaction latency (stats): 0.809365
transaction latency (timestamps): 0.721000    transaction latency (stats): 0.725384
transaction latency (timestamps): 0.506000    transaction latency (stats): 0.508766
Previously, almost every function in TOTM would check the stats_mode setting and perform the appropriate update if stats were enabled. This PR replaces that with a single check in EndTransaction(), which then iterates through the RWSet and updates the appropriate counters. Note that we lose some accuracy with this method for tuples that are accessed repeatedly within a transaction, since the RWSet records only one entry per tuple. We also lose some precision on the transaction latency stat compared to the Timer that stats used before. See the comment above for a discussion of precision.
I punted on changing the stats functions to accept deltas for now, since all the way down through backend_stats_context.cpp, access_metric.h, and counter_metric.h the only supported operations are increment and decrement. If that turns into a performance issue, we can revisit that change. We currently run with stats_mode set to INVALID, though, so we shouldn't see these increments in our testing anyway.