aerospike / aerospike-client-go

Aerospike Client Go
Apache License 2.0

About ClientPolicy.MinConnectionsPerNode vs server's proto-fd-idle-ms #374

Open xqzhang2015 opened 2 years ago

xqzhang2015 commented 2 years ago

Client

Issue

Here is the drop-idle-connections logic.

https://github.com/aerospike/aerospike-client-go/blob/b936c147db0d7ac5f881c3bde9df87e9a6829dfa/connection_heap.go#L237

For example, if we set ClientPolicy.MinConnectionsPerNode to 5, those 5 idle connections may never be checked for idleness or timeout. Of course, this only happens when Aerospike access is not busy.

For these idle connections, once they exceed the server's proto-fd-idle-ms, the Aerospike server may proactively close them while the client is still unaware. The client then gets an error like:

Node BB9353C99FXXX 10.y.y.y:3000: write tcp 10.x.x.x:46554->10.y.y.y:3000: write: broken pipe

Question

Is it possible to fully iterate the connection pool and clean up the idle connections?

khaf commented 2 years ago

You being on v1.x is the bigger news to me :) The assumption has been that those connections are dropped and the command is usually retried. Is that not the case?

xqzhang2015 commented 2 years ago

Yes, it's retried.

Let me provide more details:

  1. The broken pipe log comes from this line: https://github.com/aerospike/aerospike-client-go/blob/b936c147db0d7ac5f881c3bde9df87e9a6829dfa/command.go#L2169
  2. The command is Get()
    func (clnt *Client) Get(policy *BasePolicy, key *Key, binNames ...string) (*Record, error) {
  3. The command policy uses default values: MaxRetries=2, SleepBetweenRetries=1ms. Because I set the connection pool to 5, and the initial execution plus the retries total 3 attempts, I get 3 error logs at roughly 1ms intervals:
    I0520 02:44:42.754393 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y.y.y:3000: write tcp 10.x.x.x:57552->10.y.y.y:3000: write: broken pipe
    I0520 02:44:42.755625 command.go:2169] [aerospike] Node BB90D6223B88B0A 10.y2.y2.y2:3000: write tcp 10.x.x.x:54522->10.y2.y2.y2:3000: write: broken pipe
    I0520 02:44:42.756857 command.go:2169] [aerospike] Node BB9353C99F5580E 10.y3.y3.y3:3000: write tcp 10.x.x.x:33098->10.y3.y3.y3:3000: write: broken pipe

    Then the last write: broken pipe error is returned by Client.Get().

xqzhang2015 commented 2 years ago

@khaf any further comment?

khaf commented 2 years ago

I think I understand the issue, I just need to implement the solution. ETA this week.

xqzhang2015 commented 2 years ago

@khaf thanks so much. Looking forward to good news.

xqzhang2015 commented 2 years ago

@khaf any update on this thread?

We recently encountered another issue because of this.

Issue

Our cluster has 48 nodes. One client sets ClientPolicy.MinConnectionsPerNode to 100, so the expected connection count to the cluster is 4800, and indeed nodeStats.ConnectionsOpen reports 4800. But netstat -apn | grep ":3000" | wc -l shows only about ~1300.

Analysis

We set min-conns to handle peak traffic. Under normal traffic we don't need that many connections, and some of them sit idle past the timeout. But those connections never get closed, even once idle-timed-out, because of the min-conns floor. They are only touched again at the next traffic peak, which can surface Aerospike access errors or trigger the creation of many new connections.

Proposal

khaf commented 2 years ago

Sorry another high priority issue popped up that I had to take care of. I'll release the fix to this issue tomorrow.

xqzhang2015 commented 2 years ago

@khaf thanks for the update. We are using both v4.x and v5.x clients. Looking forward to the release.

khaf commented 2 years ago

I haven't forgotten about this ticket. The solutions that I came up with this week have not been satisfactory. Since this is an important feature of the client and I'll have to port it forward to the newer versions (and quite possibly to other clients as well), I need to take my time. But this is my current active issue, so I will solve it before moving on to other work. Tell me, are you using Go modules yet? Should I make a v4 branch with module support for the client?

xqzhang2015 commented 2 years ago

@khaf Yes, we're using Go mod. BTW, it seems v4 branch currently works well with Go mod.

khaf commented 2 years ago

OK, after writing and testing the client-side code for this, it came out of an engineering meeting that the correct resolution for this issue is to set the server idle time to 0, so the server never reaps these connections from its side. Please let me know how that goes.
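For reference, a sketch of where that setting lives, assuming the usual service context in aerospike.conf:

```conf
# aerospike.conf (service context)
service {
    proto-fd-idle-ms 0   # 0 disables server-side reaping of idle client connections
}
```

It can also be applied to a running node dynamically via asinfo's set-config (context=service; proto-fd-idle-ms=0), though a config-file change is needed to make it survive restarts.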

khaf commented 1 year ago

Closing this. Feel free to reopen or file a new ticket in case you have further questions.

xqzhang2015 commented 1 year ago

@khaf We just re-encountered this issue recently. Previously we simply disabled the min connection pool in not-so-busy environments.

I understand the idea of setting the server idle timeout to 0. But the challenge is that our Aerospike cluster is shared by multiple services and teams, and it is hard to ensure that every service/team closes its idle connections promptly, so system resources end up being used inefficiently.

Is it possible to re-evaluate the solution?

BTW, if the current solution is still the preferred one, is it possible to add a comment to IdleTimeout or MinConnectionsPerNode, to prevent this misunderstanding for others?

Thanks

khaf commented 1 year ago

I reopened the issue, so that I can track it. I don't know if the product or engineering will accept any change or addition to the current solution, but I'll bring it up.

For the comments on ClientPolicy, what would you like to be added for clarification?

ps: What version of the client are you currently on?

xqzhang2015 commented 11 months ago

@khaf sorry for the late reply.

For the comment, something like: When MinConnectionsPerNode is enabled, the server side must set the idle time to 0

And our client version is v4.5.0.

xqzhang2015 commented 10 months ago

@khaf any update for this thread?

nkonopinski commented 5 months ago

Any update here? I'm running client 6.13 and see the number of connections drop below MinConnectionsPerNode. Server CE 6.3.0.1 with proto-fd-idle-ms set to the default of 0. Connections drop too low, and when traffic spikes my requests time out.