Closed hannosch closed 10 years ago
I've been load testing one of our sites and found that heka-py was responsible for about half of the processing time of each request, specifically the protobuf encoding of messages. The site in question uses about 10 messages per request (counters and timers, no free-form messages). On the specific box, heka accounted for about 13ms of processing time per request.
I talked with Rob about this, and we concluded that heka-py's approach of trying to capture any possible message type isn't a good idea.
We've since switched the project in question over to a plain statsd client, which dropped the 13ms of overhead to almost zero.
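For context, the plain statsd client mentioned above avoids protobuf entirely: counters and timers are fire-and-forget UDP packets in statsd's plain-text wire format. A minimal sketch of such a client (hypothetical, not the actual client the project switched to) looks like this:

```python
import socket


class StatsdClient:
    """Minimal statsd client sketch. The wire format is just
    'name:value|type' over UDP, so per-metric encoding cost is
    negligible compared to protobuf serialization."""

    def __init__(self, host="127.0.0.1", port=8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _send(self, data):
        # Fire-and-forget: UDP sendto never blocks waiting for a reply.
        self.sock.sendto(data.encode("ascii"), self.addr)

    def incr(self, name, count=1):
        # Counter metric, e.g. "req.count:1|c"
        self._send("%s:%d|c" % (name, count))

    def timing(self, name, ms):
        # Timer metric in milliseconds, e.g. "req.time:13|ms"
        self._send("%s:%d|ms" % (name, ms))
```

The design choice here is the whole point: by restricting the message types to counters and timers, the client can skip general-purpose serialization altogether.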
There seems to be a protobuf C++ encoder for Python, but it's not included in the PyPI release of protobuf (2.5.0) and the build process is awkward. Ideally heka-py would make it easier to get a performant setup that includes a fast protobuf encoder.
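For reference, selecting the C++-backed protobuf implementation is done via the `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION` environment variable; the surrounding build steps below are an assumed sketch based on the protobuf source tree layout and may differ between releases:

```shell
# Assumed build sketch (paths and steps not verified for every release):
# the C++ runtime must be built and installed first, then the Python
# package is built with the C++ implementation selected.
cd protobuf-2.5.0
./configure && make && make install      # C++ runtime and headers
cd python
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
python setup.py build
python setup.py install
```

The environment variable also has to be set at runtime in the serving process, otherwise protobuf silently falls back to the pure-Python implementation.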