Closed: jcarres-mdsol closed this issue 4 years ago.
Yeah, it would be a buffering/pipeline issue. You could buffer more in the instrumentation or introduce buffering on the server.
For example, with Kafka it is safe to collect more aggressively because the backlog is persistent. I think zipkin-aws has another buffering layer for SQS that addresses a similar problem.
cc @llinder @mansu for thoughts.
So it seems the Zipkin server would benefit from buffering functionality that could be used for all inputs?
I think we've discussed that when Kafka is present we can easily do this (since Kafka is persistent, we don't need to worry about losing data).
If you are referring to the normal HTTP endpoint, it would be a little more complex because we'd have to decide where to pool the data (so that N HTTP requests don't mean N buffers, which would blow up memory).
I know that many tracers have a buffer configuration which can help in a pinch (provided it can be adjusted).
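For reference, here is roughly what raising the tracer-side buffer looks like, assuming the zipkin-reporter Java library (zipkin2.reporter.AsyncReporter) with its OkHttp sender. This is only a sketch: the exact builder methods and defaults vary by version, and the numbers are illustrative, not recommendations.

```java
import java.util.concurrent.TimeUnit;
import zipkin2.Span;
import zipkin2.codec.SpanBytesEncoder;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.okhttp3.OkHttpSender;

public class ReporterBufferExample {
  public static void main(String[] args) throws Exception {
    // Sender that POSTs JSON spans to the Zipkin server's HTTP collector.
    OkHttpSender sender = OkHttpSender.create("http://zipkin:9411/api/v2/spans");

    // A larger in-memory queue absorbs bursts at the cost of RAM, and of losing
    // whatever is still queued if the process dies.
    AsyncReporter<Span> reporter = AsyncReporter.builder(sender)
        .queuedMaxSpans(50_000)              // how many spans may wait in memory
        .messageTimeout(1, TimeUnit.SECONDS) // flush at least once per second
        .build(SpanBytesEncoder.JSON_V2);

    // The tracer hands spans to reporter.report(span); close() flushes on shutdown.
    reporter.close();
    sender.close();
  }
}
```

The trade-off is the same as above: without a persistent backlog like Kafka, anything sitting in that queue is lost if the tracer's process dies.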
For SQS, it buffers on the sender before writing to SQS. This helps make use of the 256 KB message cap that SQS imposes and reduces API calls. The SQS collector only reads as fast as the storage layer accepts writes, so for our use case SQS is effectively an off-memory buffer, just as Kafka would be.
Looking at the MySQLSpanConsumer, I don't see any logic that fixes writes at 6 KB. It might be possible to introduce logic to dynamically adjust the batch size to some tunable value, though.
If you're stuck with MySQL for storage and you're using HTTP as the transport layer, I would probably consider augmenting writes with SQS or Kafka just to keep spikes in traffic from overwhelming your storage layer.
Beyond tuning batch inserts for a specific storage component, I don't think there is much benefit in the Zipkin server buffering anything, since there are much better solutions such as SQS, Kafka, or a more scalable storage layer.
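To make the "dynamically adjust the batch size" idea a bit more concrete, here is a rough, hypothetical sketch of buffering between a collector and a storage writer. None of these names (BatchingWriter, the Consumer hook) exist in zipkin-server; a real version would also need error handling, metrics, and a backpressure policy.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/** Hypothetical: queue spans in memory and flush them to storage in tunable batches. */
public class BatchingWriter<T> implements AutoCloseable {
  private final BlockingQueue<T> queue;
  private final Consumer<List<T>> storageWriter; // e.g. a call into a storage span consumer
  private final int batchSize;
  private final Thread flusher;
  private volatile boolean closed;

  public BatchingWriter(int capacity, int batchSize, Consumer<List<T>> storageWriter) {
    this.queue = new ArrayBlockingQueue<>(capacity);
    this.batchSize = batchSize;
    this.storageWriter = storageWriter;
    this.flusher = new Thread(this::flushLoop, "batch-flusher");
    this.flusher.start();
  }

  /** Called by collectors; returns false (drops) if the bounded buffer is full. */
  public boolean offer(T element) {
    return queue.offer(element);
  }

  private void flushLoop() {
    while (!closed || !queue.isEmpty()) {
      try {
        T first = queue.poll(100, TimeUnit.MILLISECONDS);
        if (first == null) continue;
        List<T> batch = new ArrayList<>(batchSize);
        batch.add(first);
        queue.drainTo(batch, batchSize - 1); // one larger write instead of many small ones
        storageWriter.accept(batch);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }

  @Override public void close() throws InterruptedException {
    closed = true; // let the flusher drain what is left, then exit
    flusher.join();
  }
}
```

As noted above, without a persistent backlog like Kafka or SQS behind it, anything sitting in such a queue is lost if the server dies, which is exactly why those transports are the better answer.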
Thanks for the notes. We do need to make a deployment guide, and this will need to become part of it.
I've run into this exact issue. I'm using a cheap db.t2.micro RDS instance that is dedicated to Zipkin. I saw bursts of up to ~1350 IOPS with average write sizes of 6.1 KB, quickly exhausting the IO credit balance.
Since the Zipkin MySQL client isn't batching and probably won't be any time soon, I "detuned" MySQL using the innodb_flush_log_at_trx_commit parameter. RDS's default parameter group doesn't set it, and unless you've overridden it in your own parameter group, MySQL defaults to flushing the InnoDB log to disk after every transaction.
Since I don't especially care about the durability of trace data, I set this parameter to 2:
With a value of 2, the contents of the InnoDB log buffer are written to the log
file after each transaction commit and the log file is flushed to disk approximately
once per second. Once-per-second flushing is not 100% guaranteed to happen
every second, due to process scheduling issues. Because the flush to disk
operation only occurs approximately once per second, you can lose up to a
second of transactions in an operating system crash or a power outage.
With this change I saw that MySQL performs slightly larger writes but many fewer for the same workload. This was enough to make MySQL/RDS a workable solution for me.
See also: https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit.
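In case it helps anyone verify that the parameter group change actually took effect on the instance, a quick check from Java might look roughly like this. The JDBC URL and credentials are placeholders, and it assumes MySQL Connector/J is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckFlushSetting {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint and credentials for the RDS instance.
    String url = "jdbc:mysql://my-zipkin-db.example.rds.amazonaws.com:3306/zipkin";
    try (Connection conn = DriverManager.getConnection(url, "zipkin", "secret");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'")) {
      while (rs.next()) {
        // Expect "innodb_flush_log_at_trx_commit = 2" after the parameter group change.
        System.out.println(rs.getString(1) + " = " + rs.getString(2));
      }
    }
  }
}
```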
These are great notes. Thanks for sharing.
Even though there is a lot of useful MySQL tuning info in this issue, I'm going to go ahead and close it, given that MySQL storage is no longer considered "for production usage" and mostly exists "to help aid transition to supported ones" (see https://github.com/openzipkin/zipkin/tree/master/zipkin-server#mysql-storage). Given that, write performance improvements are not likely to be implemented by the core team in the foreseeable future.
As always, if your site is affected by this, feel free to dive into the technical details and raise a PR; it will definitely be considered!
I have a server using MySQL managed by Amazon RDS.
It is currently averaging 1200 IOPS and 7 MB/s of writes, which means the average IO writes about 6 KB.
The limit on how much data RDS can read/write is based on the number of IO operations. If, for instance, those writes were near the 16 KB limit instead of the current 6 KB, the throughput of the database would more than double.
I am guessing that to accomplish this, the server would need to buffer spans before writing them to the DB, which may not be an easy fix.
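Spelling out the arithmetic behind "more than double", using the numbers above:

$$\frac{7\ \text{MB/s}}{1200\ \text{IOPS}} \approx 6\ \text{KB per write}, \qquad 1200\ \text{IOPS} \times 16\ \text{KB} \approx 19\ \text{MB/s} \approx 2.7 \times \text{the current 7 MB/s}$$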