gogf / gf

GoFrame is a modular, powerful, high-performance and enterprise-class application development framework of Golang.
https://goframe.org
MIT License

os/gcache: memory leak #3817

Closed qinyuguang closed 2 months ago

qinyuguang commented 2 months ago

Go version

1.22

GoFrame version

2.7.4

Can this bug be reproduced with the latest release?

Yes

What did you do?

package main

import (
    "context"
    "log"
    "math"
    "net/http"
    _ "net/http/pprof" // registers pprof handlers on the default mux
    "time"

    "github.com/gogf/gf/v2/os/gcache"
)

func main() {
    // Expose pprof for heap inspection.
    go func() {
        log.Println(http.ListenAndServe("localhost:18888", nil))
    }()

    ctx := context.Background()

    // Cache with an LRU capacity of 10000 entries.
    cacher := gcache.New(10000)

    // Insert never-expiring (duration 0) unique keys in a tight loop.
    for i := 0; i < math.MaxInt; i++ {
        _ = cacher.Set(ctx, i, i, 0)
        time.Sleep(time.Microsecond)
    }

    log.Println("done")
}

Watch memory usage with a system monitor, or inspect the heap with pprof, e.g. `go tool pprof http://localhost:18888/debug/pprof/heap`.

What did you see happen?

memory leak

[screenshots: monitor and pprof graphs showing memory usage growing without bound]

What did you expect to see?

stable memory usage

gqcn commented 2 months ago

@qinyuguang Let me have a check.

helloteemo commented 2 months ago

@gqcn @qinyuguang It seems to be related to the scheduled LRU cleaning. This snippet sets keys very quickly, and every key is unique, so keys accumulate in the cache faster than the cleaner can evict them (removing the time.Sleep(time.Microsecond) makes the effect even more pronounced). In practice this kind of scenario is relatively rare.

test code

package main

import (
    "context"
    "log"
    "math"
    "net/http"
    _ "net/http/pprof"
    "time"

    "github.com/gogf/gf/v2/os/gcache"
)

func main() {
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()

    ctx := context.Background()

    cacher := gcache.New(10000)

    // Report the cache size once per second.
    go func() {
        for {
            size, _ := cacher.Size(ctx)
            log.Println(size)
            time.Sleep(time.Second)
        }
    }()

    // Set unique keys with no sleep: the size grows far past the LRU capacity.
    for i := 0; i < math.MaxInt; i++ {
        _ = cacher.Set(ctx, i, i, 0)
    }

    log.Println("done")
}

log:

2024/09/27 14:26:25 227
2024/09/27 14:26:26 2051642
2024/09/27 14:26:27 3591917
2024/09/27 14:26:28 6056968
2024/09/27 14:26:29 6846812
2024/09/27 14:26:30 7291025
2024/09/27 14:26:31 8709386
gqcn commented 2 months ago

Yes, I've figured that out, and resolved it in https://github.com/gogf/gf/pull/3823.