mediocregopher / radix.v2

Redis client for Go
http://godoc.org/github.com/mediocregopher/radix.v2
MIT License

Hidden stability problem in the pool when requests come in frequent peaks and troughs #52

Closed · zhaoyuxi closed this issue 7 years ago

zhaoyuxi commented 7 years ago

The Pool is too simple. When requests come in peaks and troughs, some connections risk being created and then closed again shortly afterwards, which hurts the stability of the pool. Creating a connection can take several seconds when special authentication is involved, which makes this create/close churn even worse and heavily impacts request latency.

    // Put returns a client back to the pool. If the pool is full the client is
    // closed instead. If the client is already closed (due to connection failure or
    // what-have-you) it will not be put back in the pool
    func (p *Pool) Put(conn *redis.Client) {
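
To make the churn concrete, here is a minimal, self-contained sketch (not radix.v2's actual code) of the "close when full" pattern quoted above: a burst grows the number of live connections past the pool size, and when the trough returns them, everything beyond the pool size is closed, so the next burst pays the dial/authentication cost again.

    package main

    import "fmt"

    // conn stands in for an expensive-to-dial, authenticated connection.
    type conn struct{ id int }

    func (c *conn) Close() { fmt.Printf("closing conn %d\n", c.id) }

    // simplePool mimics the channel-based pool: put keeps the connection only
    // if there is room, otherwise it closes it.
    type simplePool struct{ pool chan *conn }

    func (p *simplePool) put(c *conn) {
        select {
        case p.pool <- c:
        default:
            c.Close() // pool is full: the connection is thrown away
        }
    }

    func main() {
        p := &simplePool{pool: make(chan *conn, 2)} // pool size 2

        // A spike needed 4 connections; when the trough returns them, the two
        // extra connections are closed and must be re-dialed on the next spike.
        for i := 1; i <= 4; i++ {
            p.put(&conn{id: i})
        }
    }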

My suggested solution:

    type Client struct {
        // The most recent activity time
        LastAvailableTime time.Time
    }

    type Pool struct {
        //pool chan *redis.Client
        pool  list.List
        mutex sync.Mutex
    }

    type Cluster struct {
        // all pools
        poolsSet      map[*pool.Pool]int
        poolsSetMutex sync.Mutex
        pingTicker    *time.Ticker
    }

    type Opts struct {
        // The maximum size of the connection pool to use for each host. Default is 10.
        PoolSize int
        // The initial size of the connection pool to use for each host.
        // Suggest setting it to less than 10 to avoid the Cmd call hanging for a long time.
        InitialSize int
        // The idle size of the connection pool to use for each host.
        IdleSize int
        // Max idle time means a PING must be sent at least once within this period;
        // otherwise the connection is probably unavailable.
        MaxIdleTime time.Duration
    }
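
The heartbeat below relies on a GetIdleConn helper that does not exist in radix.v2. A hypothetical sketch of it over the list-based pool and LastAvailableTime field proposed above might look like this (the name, signature, and semantics are assumptions, not the library's API; Client is assumed to wrap the underlying connection, with Put appending to the back of the list):

    // GetIdleConn is a hypothetical helper: it pops the front connection only
    // if it has been idle for at least d, and returns nil when nothing in the
    // pool has been idle that long.
    func (p *Pool) GetIdleConn(d time.Duration) (*Client, error) {
        p.mutex.Lock()
        defer p.mutex.Unlock()

        front := p.pool.Front()
        if front == nil {
            return nil, nil // pool is empty
        }
        c := front.Value.(*Client)
        if time.Since(c.LastAvailableTime) < d {
            return nil, nil // even the least recently used connection is still fresh
        }
        p.pool.Remove(front)
        return c, nil
    }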

    // heartBeat keeps the connections active. It does some optimization on the
    // suggestion in #51.
    func (c *Cluster) heartBeat() {
        for {
            select {
            case <-c.pingTicker.C:
                allConnNum := c.getConnNum()
                pools := c.getAllPools()
                for _, pool := range pools {
                    heartbeatNum := c.getHeartBeatConnNum(time.Second, allConnNum, pool)
                    for i := 0; i < heartbeatNum; i++ {
                        idleCon, err := pool.GetIdleConn(10 * time.Second) // idle for ten seconds
                        if err != nil {
                            // ignore the error and try the next connection
                        } else if idleCon == nil {
                            // nothing has been idle long enough
                        } else {
                            // It is safe to close the connection now if the idle limit is exceeded.
                            // This avoids connections being closed during frequent peaks and troughs.
                            if pool.Avail() >= c.o.IdleSize {
                                idleCon.Close()
                            } else {
                                idleCon.Cmd("PING")
                                pool.Put(idleCon)
                            }
                        }
                    }

                    if pool.Avail() < c.o.IdleSize {
                        // Create connections lazily to reduce the time spent creating the pool.
                        con, err := pool.CreateNewClient()
                        if err != nil {
                            // ignore the error; the next tick will retry
                        }
                        if con != nil {
                            pool.Put(con)
                        }
                    }
                }
            case <-c.stopCh:
                c.pingTicker.Stop()
                return
            }
        }
    }
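
For completeness, a hypothetical wiring sketch (the stopCh field, the constructor name, and the ticker interval are my assumptions, not part of radix.v2 or of the suggestion above): start the heartbeat in its own goroutine when the cluster is created and stop it when the cluster is closed.

    // newClusterWithHeartbeat is purely illustrative and not the real
    // radix.v2 constructor.
    func newClusterWithHeartbeat(o Opts) *Cluster {
        c := &Cluster{
            poolsSet:   map[*pool.Pool]int{},
            pingTicker: time.NewTicker(o.MaxIdleTime / 2), // ping well before MaxIdleTime expires
            stopCh:     make(chan struct{}),
        }
        go c.heartBeat()
        return c
    }

    func (c *Cluster) Close() {
        close(c.stopCh) // heartBeat stops the ticker and returns
    }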

mediocregopher commented 7 years ago

Hi there @zhaoyuxi, and thanks for all the time you've put into this! I'm currently putting all of my effort into implementing radix.v3, where Pool will be implemented as an interface and therefore custom implementations like yours can be easily used. But in the meantime I don't necessarily want to make any big changes to the current implementation of Pool or Cluster, since I'd rather just put the time into working on v3 which will replace them soon anyway. Sorry to shut you down like this, hopefully in the meantime you can work off your fork of the project. Thanks for the interest though!
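
Purely as an illustration of the interface-based design mentioned here (the interface and names below are a hypothetical sketch, not the actual radix.v3 API): once the pool sits behind an interface, caller code can be written against it, and a custom implementation such as the one proposed in this issue can be dropped in.

    // ConnPool is a hypothetical interface; radix.v3's real API may differ.
    type ConnPool interface {
        Get() (*redis.Client, error)
        Put(*redis.Client)
        Close() error
    }

    // ping works with the stock pool or with a custom pool that keeps idle
    // connections warm, as long as both satisfy ConnPool.
    func ping(p ConnPool) error {
        c, err := p.Get()
        if err != nil {
            return err
        }
        defer p.Put(c)
        return c.Cmd("PING").Err
    }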

zhaoyuxi commented 7 years ago

Radix.v2 helps me a lot; thanks for your effort and your reply. I look forward to radix.v3. There are probably some network restrictions that keep me from uploading my modifications. If any interesting issue comes up, I will be happy to respond to it in detail.

mediocregopher commented 7 years ago

Good to hear, thanks!