grobian opened this issue 9 years ago
I'm definitely accepting patches. ;-)
It should be doable with a little bit of thinking. There's currently no concept of replication, but adding that to the configuration the buckyd daemon holds and having the client deal with it should be fairly easy. It would also help make the way the client handles metrics more consistent.
Then it's just a matter of understanding the sync operation (and the implications for the tar operation).
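For what it's worth, here's a minimal sketch of what that could look like, assuming Go (buckytools' language) and entirely hypothetical names (`RingConfig`, `NodesFor`); this is not buckyd's actual API, and the simple modulo placement below stands in for a real consistent-hash ring:

```go
// Hypothetical sketch, not buckytools' actual API: extend the hashring
// configuration the daemon serves with a replication factor, and let the
// client resolve each metric to R distinct nodes instead of one.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// RingConfig is what a buckyd-like daemon could expose to clients.
type RingConfig struct {
	Nodes    []string // cluster members, e.g. "graphite-01:2004"
	Replicas int      // replication factor; 1 reproduces today's behavior
}

// NodesFor returns the Replicas distinct nodes responsible for a metric,
// walking the sorted node list from the metric's hash position.
// (Simplified placement; carbon's actual consistent hashing differs.)
func (c RingConfig) NodesFor(metric string) []string {
	if len(c.Nodes) == 0 || c.Replicas < 1 {
		return nil
	}
	sorted := append([]string(nil), c.Nodes...)
	sort.Strings(sorted)

	h := fnv.New32a()
	h.Write([]byte(metric))
	start := int(h.Sum32()%uint32(len(sorted)))

	n := c.Replicas
	if n > len(sorted) {
		n = len(sorted)
	}
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, sorted[(start+i)%len(sorted)])
	}
	return out
}

func main() {
	ring := RingConfig{
		Nodes:    []string{"graphite-01:2004", "graphite-02:2004", "graphite-03:2004"},
		Replicas: 2,
	}
	// With replication awareness, a consistency check would treat a metric
	// as correctly placed if it exists on all of these nodes, and a
	// rebalance would copy it to each of them rather than move it to one.
	fmt.Println(ring.NodesFor("carbon.agents.host-01.metricsReceived"))
}
```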
Has any contribution been made yet to handle replication factors > 1? I suspect that, because of my replication factor of 2, the reports from bucky inconsistent are somewhat inaccurate. In the meantime, do you have any recommendations on how to work around this limitation when it comes to executing a rebalance?
You are correct. I presently don't have a workaround here. I had planned a work project that would have involved some more code here this month (August). However, work gifted me other explosions...
@jjneely - did you ever start the work you had planned for supporting replication factors > 1?
Alas, I have not. The clients I was working with that required large Graphite clusters have all moved on in one way or another.
In our setup we use multiple clusters with replication = 2, and each of those clusters lives in two locations (at minimum). Due to the nature of the application (high volume) and the environment (SSDs crash like crazy, switches drop under the pressure of microbursts), we need to sync both within a cluster (hence replication = 2) and between the clusters (remote).
Would something like this be anywhere near possible with bucky's current design? If so, what would need to be changed, and could we/I contribute somewhere?