eperott opened this issue 4 years ago
I'm working on a fix for this.
Your exporter worked for two weeks without any issues. Since today, the BufferOverflow occurs permanently. So, I am sorry, but I will have to install another exporter tomorrow. Best regards
One cluster works and another cluster with the same configuration does not. The exporter seems unreliable. Cassandra 4.1.2, Exporter 0.9.12.
Same here. In the meantime I am testing alternatives.
After new data (new tables) was added to one of the clusters (where the exporter had been working), that exporter now also fails with the message described above. So it seems the error is only triggered once there is (a lot of?) data on the cluster.
The release from edgelaboratories has fixed the issue for me. (AFAIK because they just merged https://github.com/instaclustr/cassandra-exporter/pull/84)
Hi, so what would it take to get this merged? We have successfully been running https://github.com/edgelaboratories/cassandra-exporter, which is essentially 0.9.12 with this change, so I really want to upgrade to 0.9.14 (or later). Seen from my chair, the best way is to get off the fork https://github.com/edgelaboratories/cassandra-exporter and back onto instaclustr, but this NIO issue prevents me from doing that. I could also maintain a fork myself, but I really think fixing the instaclustr version so it does not die with the NIO error is the best approach. The plan is to test this on Cassandra 4.1.6. Please let me know what you think.
The change to the Netty ChunkedNioStream introduced a regression.
This happens because ChunkedNioStream will not flush the buffer until it has reached the configured chunk size; instead, it just makes another call to ReadableByteChannel.read().
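To make the failure mode concrete, here is a minimal, hypothetical sketch. It is not the exporter's actual serializer; the `RecordChannel` class and the record format are invented for illustration. It models a ReadableByteChannel that writes one whole record per read() call and assumes the buffer is drained between calls. Because ChunkedNioStream keeps calling read() on the same buffer until the chunk size is reached, such a channel eventually hits a BufferOverflowException:

```java
import java.io.IOException;
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.charset.StandardCharsets;

/**
 * Hypothetical channel that writes one whole "record" per read() call.
 * It assumes the destination buffer has been flushed since the last call.
 */
final class RecordChannel implements ReadableByteChannel {
    private int remainingRecords = 1000;

    @Override
    public int read(ByteBuffer dst) {
        if (remainingRecords == 0) {
            return -1; // end of input
        }
        byte[] record = ("metric_" + remainingRecords-- + " 42\n")
                .getBytes(StandardCharsets.UTF_8);
        // Throws java.nio.BufferOverflowException once
        // dst.remaining() < record.length.
        dst.put(record);
        return record.length;
    }

    @Override
    public boolean isOpen() { return true; }

    @Override
    public void close() {}
}

public class ChunkedReadDemo {
    public static void main(String[] args) throws IOException {
        ReadableByteChannel channel = new RecordChannel();
        ByteBuffer chunk = ByteBuffer.allocate(64); // small "chunk size" for the demo

        // Fill the chunk the way ChunkedNioStream does: keep calling read()
        // on the same buffer until it is full, without flushing in between.
        try {
            while (true) {
                int n = channel.read(chunk);
                if (n < 0 || !chunk.hasRemaining()) {
                    break;
                }
            }
        } catch (BufferOverflowException e) {
            System.out.println("Overflow after " + chunk.position() + " bytes buffered");
        }
    }
}
```

In general, a channel avoids this by writing at most dst.remaining() bytes per call and carrying any leftover bytes over to the next read(); I have not checked whether that is exactly what the linked PR does.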