Closed — @1819997197 closed this issue 4 years ago
Hello @1819997197,
This looks like a Redis Go driver issue rather than an Iris one. I think it's better to open this issue on the mediocregopher/radix repository instead. Let me link an issue there; its author is active and also helped with an issue a neffos user had in the past.
Please join us in the chat, it looks like I should know about your use case: 3 million keys with the same prefix (session info) is a lot.
Hello @kataras.
Yes, this looks like a Redis Go driver problem. However, there is one point I have never understood: a user's login state is stored as multiple string keys with the same prefix. Why not store it as a hash map instead?
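For readers skimming the thread, the difference between the two layouts can be sketched as plain key construction (the field names and delimiter here are made up for illustration; Iris's actual key schema may differ):

```go
package main

import "fmt"

// stringKeyLayout returns the Redis keys created when each session
// field is stored as its own prefixed string key: one key per field.
func stringKeyLayout(prefix, sid string, fields []string) []string {
	keys := make([]string, 0, len(fields))
	for _, f := range fields {
		keys = append(keys, prefix+sid+"-"+f) // e.g. "ucweb:sess:42-uid"
	}
	return keys
}

// hashLayout returns the single Redis key used when the whole session
// is stored as one hash; fields become hash fields, not extra keys.
func hashLayout(prefix, sid string) string {
	return prefix + sid // HSET ucweb:sess:42 uid ... name ...
}

func main() {
	fields := []string{"uid", "name", "role"}
	fmt.Println(len(stringKeyLayout("ucweb:sess:", "42", fields))) // prints 3: one key per field
	fmt.Println(hashLayout("ucweb:sess:", "42"))                   // prints ucweb:sess:42: one key total
}
```

With millions of users, the first layout multiplies the keyspace by the number of session fields; the second keeps one key per session.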
Hello @1819997197,
We used to save them as a hash map, but it was slower, and someone, like you, asked to revert back to simple prefixed keys. However, if you can push a PR (now that you have the stack to compare the performance better than I can), we can add an option to the redis database that saves the data using either a hash map or multiple prefixed keys :)
Hello @kataras.
Okay, let me verify the performance first.
That sounds great. Don't hesitate to open a PR anyway; we can discuss it while you are writing it too, there is no need to hurry here.
Okay, thank you
Hello @kataras.
I ran a performance test today with 2 million hash keys:
func BenchmarkPivotIndex(b *testing.B) {
    db := redis_hash.New(redis_hash.Config{
        Network:   "tcp",
        Addr:      "",
        Timeout:   time.Duration(3600) * time.Second,
        MaxActive: 10,
        Password:  "",
        Database:  "",
        Prefix:    "ucweb:sess:",
        Driver:    redis_hash.Radix(),
        Clusters: []string{
            "10.10.18.41:7000", "10.10.18.41:7001", "10.10.18.41:7002",
            "10.10.18.42:7003", "10.10.18.42:7004", "10.10.18.42:7005",
            "10.10.18.44:7006", "10.10.18.44:7007", "10.10.18.44:7008",
        },
    })
    defer db.Close()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        rand.Seed(time.Now().UnixNano() + int64(i))
        num := rand.Intn(100000000) % 160000
        uid := fmt.Sprintf("%v", num+40000)
        db.Get(uid, "uid")
        _ = db.OnUpdateExpiration(uid, time.Duration(86400*7)*time.Second)
    }
}
Test Results:
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 470 2916862 ns/op
PASS
ok test/iris_sess_ext 1.659s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 314 3394616 ns/op
PASS
ok test/iris_sess_ext 1.344s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 415 3751563 ns/op
PASS
ok test/iris_sess_ext 2.669s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 369 2889817 ns/op
PASS
ok test/iris_sess_ext 1.400s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 447 3229066 ns/op
PASS
ok test/iris_sess_ext 2.624s
Looks good. Can you test the current implementation too? (It should be trivial now.)
Yes, but it will take some time.
Hello @kataras.
Performance test of the current implementation, with 2 million string keys:
func BenchmarkPivotIndex2(b *testing.B) {
    db := redis.New(redis.Config{
        Network:   "tcp",
        Addr:      "", //10.11.0.116:6379
        Timeout:   time.Duration(30) * time.Second,
        MaxActive: 10,
        Password:  "",
        Database:  "",
        Prefix:    "ucweb:redis:",
        Delim:     "-",
        Driver:    redis_drive.RadixCluster(), // redis.Radix() can be used instead.
        Clusters: []string{
            "10.10.18.41:7000", "10.10.18.41:7001", "10.10.18.41:7002",
            "10.10.18.42:7003", "10.10.18.42:7004", "10.10.18.42:7005",
            "10.10.18.44:7006", "10.10.18.44:7007", "10.10.18.44:7008",
        },
    })
    defer db.Close()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        rand.Seed(time.Now().UnixNano() + int64(i))
        num := rand.Intn(100000000) % 200000
        uid := fmt.Sprintf("%v", num)
        db.Get(uid, "uid")
        _ = db.OnUpdateExpiration(uid, time.Duration(86400*7)*time.Second)
    }
}
Test Results:
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 1 1796369835 ns/op
PASS
ok test/iris_sess_ext 1.856s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 1 1467857469 ns/op
PASS
ok test/iris_sess_ext 1.526s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 1 1313175806 ns/op
PASS
ok test/iris_sess_ext 1.399s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 1 1173946450 ns/op
PASS
ok test/iris_sess_ext 1.229s
[vagrant@localhost iris_sess_ext]$ go test -v -bench .
goos: linux
goarch: amd64
pkg: test/iris_sess_ext
BenchmarkPivotIndex
BenchmarkPivotIndex-2 1 1392850879 ns/op
PASS
ok test/iris_sess_ext 1.472s
Almost every operation takes more than 1s.
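My guess at why the gap is so large (an assumption based on the numbers, not verified against the driver source): with prefixed string keys, the driver has to discover every key belonging to a session before it can touch them, which on a multi-million-key cluster typically means a SCAN with a MATCH pattern followed by one EXPIRE per key, while the hash layout needs a single EXPIRE on one key. A sketch that models each strategy as the list of commands it would issue:

```go
package main

import "fmt"

// expireStringKeys mimics the prefixed-string-key approach: discover the
// session's keys (a SCAN over the keyspace), then EXPIRE each one.
// The command strings stand in for actual network round trips.
func expireStringKeys(prefix, sid string, sessionKeys []string) []string {
	cmds := []string{fmt.Sprintf("SCAN 0 MATCH %s%s*", prefix, sid)}
	for _, k := range sessionKeys {
		cmds = append(cmds, "EXPIRE "+k+" 604800")
	}
	return cmds
}

// expireHash mimics the hash approach: the whole session lives under one
// key, so a single EXPIRE suffices regardless of keyspace size.
func expireHash(prefix, sid string) []string {
	return []string{"EXPIRE " + prefix + sid + " 604800"}
}

func main() {
	keys := []string{"ucweb:redis:42-uid", "ucweb:redis:42-name"}
	fmt.Println(len(expireStringKeys("ucweb:redis:", "42", keys))) // prints 3: SCAN plus 2 EXPIREs
	fmt.Println(len(expireHash("ucweb:sess:", "42")))              // prints 1: a single EXPIRE
}
```

The SCAN cost grows with the total keyspace, not with the session size, which would explain why the string-key runs above complete only one benchmark iteration.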
@1819997197 I don't see a performance boost for hashes vs the current implementation, or am I not reading the results correctly?
Hello @kataras.
Four million Redis keys in total, two million each for the string-key and hash-key layouts. Performance test: get a random session value and update its expiration time.
hash-keys (average ~3ms per op), the same five runs as above:
BenchmarkPivotIndex-2   470   2916862 ns/op
BenchmarkPivotIndex-2   314   3394616 ns/op
BenchmarkPivotIndex-2   415   3751563 ns/op
BenchmarkPivotIndex-2   369   2889817 ns/op
BenchmarkPivotIndex-2   447   3229066 ns/op
string-keys (average ~1.4s per op), the same five runs as above:
BenchmarkPivotIndex-2   1   1796369835 ns/op
BenchmarkPivotIndex-2   1   1467857469 ns/op
BenchmarkPivotIndex-2   1   1313175806 ns/op
BenchmarkPivotIndex-2   1   1173946450 ns/op
BenchmarkPivotIndex-2   1   1392850879 ns/op
(Different machine configurations may show slightly different numbers.)
Oh yes, I see it now, and I could have seen it above too! The difference is huge. Would you like to do it through a PR? You should get the credit for that one. Just don't forget to add a sessiondb/redis.Config.DisableHashMap bool
(by default the hash map will be enabled) and implement the solution on the driver(s) by checking this configuration field. (I don't like having two methods to set and retrieve a key, but I am sure someone will ask for prefixed keys again in the future... so let's have both and let users decide what suits their needs best.)
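A possible shape for that option (DisableHashMap is the field name proposed above; everything else, including the command strings, is a hypothetical sketch, not the actual sessiondb/redis implementation):

```go
package main

import "fmt"

// Config sketches the proposed sessiondb/redis option: hash-map storage
// is the default, and DisableHashMap reverts to prefixed string keys.
type Config struct {
	Prefix         string
	Delim          string
	DisableHashMap bool // hypothetical field proposed in this thread
}

// setCommand returns the Redis command the driver would issue to store a
// single session field under each storage mode.
func setCommand(c Config, sid, field, value string) string {
	if c.DisableHashMap {
		// One string key per field: SET <prefix><sid><delim><field> <value>
		return fmt.Sprintf("SET %s%s%s%s %s", c.Prefix, sid, c.Delim, field, value)
	}
	// One hash per session: HSET <prefix><sid> <field> <value>
	return fmt.Sprintf("HSET %s%s %s %s", c.Prefix, sid, field, value)
}

func main() {
	c := Config{Prefix: "ucweb:sess:", Delim: "-"}
	fmt.Println(setCommand(c, "42", "uid", "1001")) // prints HSET ucweb:sess:42 uid 1001
	c.DisableHashMap = true
	fmt.Println(setCommand(c, "42", "uid", "1001")) // prints SET ucweb:sess:42-uid 1001
}
```

Branching on the flag inside the driver keeps both layouts behind the same Database interface, so callers don't change.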
Okay. My own code implements the Database interface (github.com/kataras/iris/v12@v12.1.8/sessions/database.go:24) through an extension. When I submit a PR, I may also need to consider compatibility with the existing string-keys version to see how to integrate it more cleanly.
Hello Makis!
In a production environment, the Redis cluster has more than three million keys. When the program operates on the session (updating the session expiration time), the Redis cluster's CPU soars to 50% and the operation takes several seconds.
While debugging, I found the place where it freezes; it's here:
With the current session storage mechanism, a user's login state produces multiple string keys, so as the number of users grows, the number of Redis keys becomes very large. I extended a new version that changes the session storage in Redis to hashes, and it also supports both Redis standalone mode and cluster mode. I hope it proves convenient for fellow developers.