2opremio opened this issue 10 years ago
+1 on this. We're hitting the same thing I believe.
@2opremio, have you worked around this in your application in any sort of elegant way? Right now we just recycle the app server processes, which sorts things out, but that's obviously heavy-handed.
Increasing the retry value helps mitigate the problem sooner, but it's also risky because it prevents you from failing early if connections are stuck rather than in CLOSE_WAIT (in that case operations will block for (retry+1)*timeout).
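To put rough numbers on that trade-off, here's a small sketch; the timeout and retry values are made up, and the variable names just mirror pycassa's ConnectionPool `timeout` / `max_retries` arguments:

```python
# Hypothetical pool settings (exact defaults vary by pycassa version).
timeout = 0.5      # per-attempt socket timeout, in seconds
max_retries = 30   # bumped up so the pool churns through dead sockets faster

# If the connections are genuinely hung (not in CLOSE_WAIT), each attempt
# blocks for the full timeout, so a single operation can stall for:
worst_case_block = (max_retries + 1) * timeout   # 15.5 seconds
print(worst_case_block)
```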
Whenever we restart Cassandra machines, our pycassa clients end up getting a lot of TMaximumRetryException exceptions due to "TSocket read 0 bytes" errors when accessing the connections.
This happens because sockets in the pool are left in CLOSE_WAIT after the Cassandra machines are restarted, combined with our connection pool being larger than the number of machines in the cluster and the number of retries being smaller than the pool size, which I believe is a common setup.
Although this is a problem in all our pycassa clients, it's particularly bad on machines with low traffic, since it takes quite some time for them to exhaust the sockets in the CLOSE_WAIT state (in some cases it can take hours).
Since it's possible (at least on Linux) to check whether a socket is in CLOSE_WAIT with getsockopt(), I would propose doing so when getting connections from the pool.
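A minimal sketch of that check, assuming Linux and the TCP_INFO socket option (the constant values are the Linux ones, and the helper function name is mine):

```python
import socket
import struct

# On Linux, TCP_INFO is option 11 on IPPROTO_TCP; the first byte of
# struct tcp_info is tcpi_state, and 8 is TCP_CLOSE_WAIT in the kernel's
# TCP state enum. Fall back to the raw option number if the Python
# socket module doesn't expose TCP_INFO.
TCP_INFO = getattr(socket, 'TCP_INFO', 11)
TCP_CLOSE_WAIT = 8

def is_close_wait(sock):
    """Return True if the peer has closed its end (socket is in CLOSE_WAIT)."""
    try:
        info = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    except (OSError, socket.error):
        # Non-Linux platform or unsupported option; assume the socket is usable.
        return False
    tcpi_state = struct.unpack('B', info[:1])[0]
    return tcpi_state == TCP_CLOSE_WAIT
```

In the pool's case the check would have to reach the raw socket underneath the Thrift transport, and connections that come back True could simply be discarded and replaced before being handed out.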
If there's a concern about the performance impact of the extra checks, I wouldn't mind hiding them behind a ConnectionPool setting that defaults to false.
Another possibility would be to propagate some extra information about the failure in TMaximumRetryException indicating that the socket was in CLOSE_WAIT, letting the programmer decide whether it's a good idea to retry or not.
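Just to illustrate what that could look like from the caller's side: the attribute name below is invented (nothing like it exists in pycassa today), and I'm assuming the exception in question is pycassa.pool's MaximumRetryException:

```python
from pycassa.pool import MaximumRetryException  # the retry exception discussed above

def read_row(column_family, row_key):
    try:
        return column_family.get(row_key)
    except MaximumRetryException as exc:
        # Hypothetical attribute carrying the proposed extra information.
        if getattr(exc, 'socket_in_close_wait', False):
            # The failures came from stale pooled sockets, so one more
            # attempt after the pool has dropped them is reasonable.
            return column_family.get(row_key)
        raise
```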
Some other, not very elegant solutions.