Shitaibin opened this issue 4 years ago (status: Open)
fatal error: runtime: out of memory
@janisz should we do something with it?
@Shitaibin can you give some stats about the testing box? If the runtime is out of memory I'm not inherently sure there's much we can do about that (doesn't mean we can't try). It could also be a permissions issue. Does this happen on the same box, but with go1.13?
@mxplusb same error with go1.13 on this host. It passed all benchmarks on my MacBook Pro with go1.13.
@mxplusb @cristaloleg the cause is the -benchtime flag. With the default value of 1s, all benchmarks pass. With -benchtime > 1s, it panics. But I don't know why this flag leads to the panic. Could it be that memory isn't freed after each iteration of the benchmark?
Info from: https://golang.org/cmd/go/#hdr-Testing_flags
-benchtime t
Run enough iterations of each benchmark to take t, specified
as a time.Duration (for example, -benchtime 1h30s).
The default is 1 second (1s).
The special syntax Nx means to run the benchmark N times
(for example, -benchtime 100x).
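For context, here is a rough sketch of why a larger -benchtime can raise peak memory. This is not the bigcache benchmark itself and the names are invented for illustration; the point is only that the testing package keeps increasing b.N until one run of the benchmark body takes the requested duration, so anything the body keeps alive grows with -benchtime.

```go
package benchtime_sketch_test

import (
	"strconv"
	"testing"
)

// BenchmarkWriteGrowsWithBenchtime is a stand-in for a cache write benchmark.
// The testing package keeps raising b.N until a run takes roughly -benchtime,
// so the map below ends up holding b.N entries: with -benchtime=4s, roughly
// four times as much memory stays live as with the default 1s.
func BenchmarkWriteGrowsWithBenchtime(b *testing.B) {
	cache := map[string][]byte{} // stand-in for the cache under test
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		cache[strconv.Itoa(i)] = make([]byte, 100)
	}
	b.Logf("entries held at the end of this run: %d", len(cache))
}
```

If the bigcache benchmarks behave similarly (entries inserted per iteration and kept reachable for the whole run), a longer -benchtime would directly increase peak memory on an 8G host; that is a hypothesis, not something the trace alone proves.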
IIRC the internal default time resolution is 1s, but that shouldn't really affect -benchtime too much. Does go test -v -bench=. not provide what you need?
Compared with running without the -v flag, it only adds a few more PAUSE/PASS lines, which aren't useful here. My host has 8G of memory, does that matter?
8G is more than enough. Looking at what you've uploaded, it's throwing the same error, but in a different location:
goroutine 601 [running]:
runtime.systemstack_switch()
/home/centos/go/src/runtime/asm_amd64.s:330 fp=0xc04a220bc8 sp=0xc04a220bc0 pc=0x459ab0
runtime.(*mheap).alloc(0x678480, 0x1, 0x7f10a0010005, 0x7f10aecaa668)
/home/centos/go/src/runtime/mheap.go:1085 +0x8a fp=0xc04a220c18 sp=0xc04a220bc8 pc=0x42472a
runtime.(*mcentral).grow(0x678960, 0x0)
/home/centos/go/src/runtime/mcentral.go:255 +0x7b fp=0xc04a220c58 sp=0xc04a220c18 pc=0x41678b
runtime.(*mcentral).cacheSpan(0x678960, 0x1bf)
/home/centos/go/src/runtime/mcentral.go:106 +0x2fe fp=0xc04a220cb8 sp=0xc04a220c58 pc=0x4162ae
runtime.(*mcache).refill(0x7f10b479fd98, 0x205)
/home/centos/go/src/runtime/mcache.go:138 +0x85 fp=0xc04a220cd8 sp=0xc04a220cb8 pc=0x415d55
runtime.(*mcache).nextFree(0x7f10b479fd98, 0xc1cbc3d505, 0xc1cbfffbf0, 0x7f10aecaa668, 0x67be00)
/home/centos/go/src/runtime/malloc.go:854 +0x87 fp=0xc04a220d10 sp=0xc04a220cd8 pc=0x40b6e7
runtime.mallocgc(0x7, 0x0, 0x502400, 0xc028c51320)
/home/centos/go/src/runtime/malloc.go:998 +0x5ae fp=0xc04a220db0 sp=0xc04a220d10 pc=0x40be3e
runtime.slicebytetostring(0x0, 0xc04a220e71, 0x7, 0x7, 0x0, 0x0)
/home/centos/go/src/runtime/string.go:102 +0x9f fp=0xc04a220de0 sp=0xc04a220db0 pc=0x446c0f
strconv.formatBits(0x0, 0x0, 0x0, 0x15c467, 0xa, 0x7ff8f78752670000, 0xc04a220ef0, 0x4fe575, 0xc0000741b0, 0x87f008, ...)
/home/centos/go/src/strconv/itoa.go:200 +0x318 fp=0xc04a220e98 sp=0xc04a220de0 pc=0x484d08
strconv.FormatInt(0x15c467, 0xa, 0x15c467, 0x0)
/home/centos/go/src/strconv/itoa.go:29 +0xdb fp=0xc04a220f00 sp=0xc04a220e98 pc=0x48499b
strconv.Itoa(...)
/home/centos/go/src/strconv/itoa.go:35
github.com/allegro/bigcache/v2.readFromCacheNonExistentKeys.func1(0xc033a26000)
/home/centos/gopath/src/github.com/allegro/bigcache/bigcache_bench_test.go:175 +0x75 fp=0xc04a220f60 sp=0xc04a220f00 pc=0x510f75
testing.(*B).RunParallel.func1(0xc01fe4c8b0, 0xc01fe4c8a8, 0xc01fe4c8a0, 0xc0ac710380, 0xc0339842c0)
/home/centos/go/src/testing/benchmark.go:742 +0xa1 fp=0xc04a220fb8 sp=0xc04a220f60 pc=0x4c5621
runtime.goexit()
/home/centos/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc04a220fc0 sp=0xc04a220fb8 pc=0x45ba01
created by testing.(*B).RunParallel
/home/centos/go/src/testing/benchmark.go:735 +0x192
Before it was the bytesQueue that was failing to allocate more memory; now it's an anonymous function that can't convert an integer to a string. I don't think this is an issue with bigcache, I think this is an issue with your testing machine. I can't replicate it, and to my knowledge nothing unusual is happening in the CI system.
@janisz what do you think?
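For reference, the failing frame is the closure that readFromCacheNonExistentKeys passes to b.RunParallel. Below is a sketch of its shape, inferred from the stack trace rather than copied from bigcache_bench_test.go, so the key generation and helper name are assumptions:

```go
package bigcache_bench_sketch

import (
	"strconv"
	"testing"

	"github.com/allegro/bigcache/v2"
)

// readFromCacheNonExistentKeysSketch mirrors the frames in the trace above:
// strconv.Itoa -> strconv.FormatInt -> runtime.slicebytetostring -> mallocgc.
func readFromCacheNonExistentKeysSketch(b *testing.B, cache *bigcache.BigCache) {
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			// The OOM is raised while allocating a ~7-byte key string, which
			// suggests the heap was already exhausted before this tiny allocation.
			cache.Get(strconv.Itoa(i))
			i++
		}
	})
}
```

In other words, the tiny strconv allocation is likely just where an already-full heap finally fails, which would explain why the reported location moves around between runs.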
Regarding "I think this is an issue with your testing machine": @Shitaibin can you share your ulimit -a configuration?
The different location is because I used -benchtime=2s. With -benchtime=4s, it always shows the same location. ulimit output and details are in the log file:
centos-benchtime=4s-ulimit.txt
Running the benchmark with the same parameters on my Mac, it always produces a different panic: panic: runtime error: slice bounds out of range [16:0]. ulimit output and details are in the log file:
macos-benchtime=4s-ulimit.txt
I ran a set of tests combining -benchtime and -run=^$. The results: with -run=^$, it always passes. Without -run=^$, it always prints these logs before the benchmarks:
2020/03/18 02:11:29 Allocated new queue in 264ns; Capacity: 94
2020/03/18 02:11:29 Allocated new queue in 749.477µs; Capacity: 585000
2020/03/18 02:11:29 Allocated new queue in 119.722µs; Capacity: 1170000
Does that matter? Why does -run=^$ affect the benchmark behavior?
go test with -bench runs the benchmarks after running the tests. -run=^$ matches no tests, so the cause of the panic may be related to the tests.
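To make that concrete, here is a minimal sketch (not from the bigcache repo; the names and the 64 MiB allocation are invented) of how tests and benchmarks share one process and why -run=^$ changes what executes:

```go
package runfilter_sketch_test

import "testing"

// `go test -bench=.` runs the package's tests first and then its benchmarks,
// all in the same process; -run='^$' matches no test name, so only the
// benchmarks run. If a test keeps memory reachable (as the global below does),
// the benchmarks start with less headroom. Whether something similar happens
// in bigcache's suite is a guess, but it would fit the extra
// "Allocated new queue" logs appearing only without -run=^$.
var retained [][]byte

func TestWarmup(t *testing.T) {
	retained = append(retained, make([]byte, 64<<20)) // 64 MiB kept alive for the whole process
}

func BenchmarkRead(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = len(retained)
	}
}
```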
Maybe it's related to https://github.com/allegro/bigcache/issues/148
What is the issue you are having?
The benchmark panicked with go1.14 on CentOS 7.
What is BigCache doing that it shouldn't?
It should not panic when allocating memory.
Minimal, Complete, and Verifiable Example
Environment:
OS (/etc/os-release or winver.exe): CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"