Closed: tehlers320 closed this issue 7 years ago
What are we comparing here? I'm assuming some parts of these graphs show statsd and others show statsdaemon, but I'm unclear which is which.
statsd 0.8.0 processing time vs `aliasByNode(service_is_statsdaemon.instance_is_$instance.statsd_type_is_*.mtype_is_gauge.type_is_calculation.unit_is_ms, 2)`
I guess the question is: does this metric imply it takes 600 ms to process the gauges and send them, or am I misreading it?
Closing; I don't want to leave cruft open. I think this is a non-issue. More important is the drop rate vs statsd-proxy, and that is covered by another issue.
I may be misinterpreting the processing-time metric, but the following setup appears to outperform statsdaemon, at least in the `processing_time` category. I assumed that maybe the four Node.js PIDs should be summed, but even so they add up to much less processing time. Is this something to worry about? Does 600 ms mean latency before a metric gets sent to Graphite?
Setup:
- statsd-proxy (C-code re-write)
- statsd 0.8.0 x4
- m4.xlarge, Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
- statsdaemon v0.6 (built w/go1.8rc2, git hash 1.0.0-12-g1a031b9), set to nice -1
- carbon-c-relay behind statsd doing aggregation (set to nice 19 to not steal cycles from statsdaemon)
There seems to be no performance difference between the two setups other than CPU usage reduction.
CPU usage does go down substantially when the test begins at 10:30.
UDP packet drops (statsdaemon enabled at 10:30): even though statsdaemon shows it is more efficient on CPU, there are more drops than with statsd-proxy, which uses SO_REUSEPORT (kernel 3.9+ only).