sb10 opened 4 years ago
/cc @aclements @rsc @randall77 @dvyukov
It seems that you run out of memory on the machine. Errno 12 is ENOMEM. Maybe you have overcommit or swap disabled, a ulimit -v set, or a memcg limit in place.
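For reference, one way to confirm which limit the failing process actually runs under is to read the address-space rlimit from inside the program. A minimal sketch, assuming a Linux build (mmap returning ENOMEM is typically checked against virtual address space, not resident memory):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_AS is the per-process virtual address space limit
	// (what `ulimit -v` controls); with it set, mmap can fail with
	// ENOMEM even when resident memory is well below the limit.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_AS, &rl); err != nil {
		fmt.Println("Getrlimit:", err)
		return
	}
	fmt.Printf("RLIMIT_AS: cur=%d bytes, max=%d bytes\n", rl.Cur, rl.Max)
}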
It turns out I only have this problem on a machine where a 5GB per-process memory limit is enforced.
$ ulimit -m
5000000
Though LSF reports that peak memory when using race is less than 2GB (average 980MB), while peak memory without race is reported as 800MB (average 300MB).
Is this difference in memory usage expected? Could there be a very brief >5GB heap allocation with race that LSF doesn't detect in its peak memory usage report?
What exactly is that -m? Are you sure you restrict and measure the same memory? RSS? Virtual? Allocated? Locked? Accounted? There are lots of them :)
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 541143
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) 5000000
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 541143
virtual memory (kbytes, -v) 5000000
file locks (-x) unlimited
So -m is "max memory size". I don't know how LSF measures memory usage, but in any case it seems that so much more memory is being used than normal that it hits this limit.
Memory use increase under the race detector is very much expected; see: https://golang.org/doc/articles/race_detector.html#Runtime_Overheads
So I would say increase the limits, or don't set them at all. If you set limits, at some point a normal program will crash as well; it's not possible to guarantee that any program works under any limits.
Sure, I expect some memory usage increase; the question is, is this much of an increase a possible bug? Is there a bad interaction between go-deadlock and the Go race detector that uncovers some unexpected runaway memory usage? Or is the memory usage legitimate?
If there's no easy way to answer this question, I guess this issue can be closed.
Sure, I expect some memory usage increase; the question is, is this much of an increase a possible bug?
This has some reference numbers: https://golang.org/doc/articles/race_detector.html#Runtime_Overheads
Is the memory consumption increase way above these numbers?
Is there a bad interaction between go-deadlock and the Go race detector that uncovers some unexpected runaway memory usage?
I don't know. You are filing the bug, so I assume you have answers :) Does memory usage grow without bound under the race detector, whereas it does not grow in a normal build?
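One rough way to answer that is to log memory statistics periodically while the tests run, once with -race and once without, and compare the trend. A sketch, assuming a hypothetical TestMain placed alongside the tests (the package name and interval are made up):

package jobqueue_test // hypothetical placement alongside the tests under suspicion

import (
	"log"
	"os"
	"runtime"
	"testing"
	"time"
)

func TestMain(m *testing.M) {
	done := make(chan struct{})
	go func() {
		t := time.NewTicker(5 * time.Second)
		defer t.Stop()
		for {
			select {
			case <-done:
				return
			case <-t.C:
				var ms runtime.MemStats
				runtime.ReadMemStats(&ms)
				// If Sys keeps climbing only under -race, that points at
				// runaway growth rather than a fixed overhead.
				log.Printf("HeapAlloc=%dMB Sys=%dMB", ms.HeapAlloc>>20, ms.Sys>>20)
			}
		}
	}()
	code := m.Run()
	close(done)
	os.Exit(code)
}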
Hello, I'm seeing this issue too on the GitHub Actions runner: https://github.com/benitogf/level/runs/2772001793?check_suite_focus=true (only on Windows, though).
Redirected here: https://github.com/golang/go/issues/22553 since the error code is different (1455).
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
Yes

What operating system and processor architecture are you using (go env)?
go env Output

What did you do?

What did you expect to see?
Tests should pass cleanly the same way they do without race (go test -p 1 -tags netgo --count 1 ./jobqueue -run TestJobqueueRunners -v).

What did you see instead?
Variations on:

Which exits the test. It happens during a seemingly random test each attempt.

Additional info:
My code uses github.com/sasha-s/go-deadlock. I could go back to sync, but I want to run go-deadlock in production to catch deadlock bugs that my tests aren't finding. If this is a bad interaction between go-deadlock and the Go race detector, I have no idea how to debug it.
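If the interaction can't be diagnosed, one workaround is to compile go-deadlock in only when explicitly requested, so that -race test runs fall back to plain sync. A sketch, assuming a hypothetical internal locks package and a made-up deadlock build tag:

// locks/locks_deadlock.go (hypothetical)
//go:build deadlock

package locks

import "github.com/sasha-s/go-deadlock"

// Drop-in aliases so the rest of the code always imports locks.Mutex.
type (
	Mutex   = deadlock.Mutex
	RWMutex = deadlock.RWMutex
)

// locks/locks_sync.go (hypothetical)
//go:build !deadlock

package locks

import "sync"

type (
	Mutex   = sync.Mutex
	RWMutex = sync.RWMutex
)

Production builds would pass -tags deadlock, while go test -race would leave it off, keeping go-deadlock and the race detector out of the same binary.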