Closed: jordanlewis closed this issue 7 years ago.
I stopped the build but left the instance running in case someone wants to do forensics.
Seems like an issue shutting down the server after a test. Not much else to go on.
@WillHaack seems to have more details on this. Could you chime in?
I believe @cuongdo is correct. Some tests may time out, and when they do, I'm not sure they shut down their servers properly, so subsequent tests may have trouble binding to ports. (I'm not even sure they close their connections properly when they run successfully.) I'll try to work on fixing this today. If all else fails, it may be worth making multinode tests run only with the -multinode flag until further notice. (IMO it's not a big deal to keep multinode tests that are failing, but it is a big deal if they cause other tests to fail.)
https://github.com/cockroachdb/cockroach/pull/8075 likely fixed the issue. The benchmark tests have been running for 7+ hours now. They were failing consistently in about 2.5hrs before.
@WillHaack I think they're broken in a different way now. Can you please investigate? They've now been running for 30 hours -- seems like they might have stalled again. https://teamcity.cockroachdb.com/viewLog.html?buildId=7571&tab=buildResultsDiv&buildTypeId=Cockroach_BenchmarkTests
Do they now pass, with #10237 in?
Looks like they are broken by the move to azure builders? Tonight's run failed with "terraform is not in your path": https://teamcity.cockroachdb.com/viewLog.html?tab=buildLog&logTab=tree&filter=debug&expand=all&buildId=37243#_focus=724
Probably we should run this orchestration from inside the builder container instead of adding terraform to the TC builders. cc @jordanlewis
The benchmark tests still fail. It seems that a bunch of tests fail that require Postgres to be installed. Did we used to ignore these somehow? The move to Azure didn't affect what is installed and running on the benchmark machines.
Current issue is that `sql.BenchmarkPgbenchExec_{Cockroach,Postgres}` fail in the absence of `pgbench`, which we do not install anywhere. Updated the description and assigned @dt.
The `pgbench` tests now pass - thanks @dt. The last remaining failures are runs of `BenchmarkInsertDistinct100Multinode_Cockroach`. The logs don't have anything particularly useful, but here they are for completeness:
BenchmarkInsertDistinct100Multinode_Cockroach-16 --- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-16
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
BenchmarkInsertDistinct100Multinode_Cockroach-16 --- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-16
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
BenchmarkInsertDistinct100Multinode_Cockroach-16 1000 1026240 ns/op 877898 B/op 4798 allocs/op
BenchmarkInsertDistinct100Multinode_Cockroach-16 --- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-16
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
BenchmarkInsertDistinct100Multinode_Cockroach-16 2000 992051 ns/op 811930 B/op 4099 allocs/op
BenchmarkInsertDistinct100Multinode_Cockroach-16 --- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-16
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
BenchmarkInsertDistinct100Multinode_Cockroach-16 100 46957150 ns/op 2767293 B/op 23394 allocs/op
BenchmarkInsertDistinct100Multinode_Cockroach-16 2000 1023041 ns/op 821077 B/op 4196 allocs/op
BenchmarkInsertDistinct100Multinode_Cockroach-16 --- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-16
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
bench_test.go:1221: pq: replay txn
I spoke too soon: in the `stderr` log, we have the following exciting panic:
panic: nil clock is forbidden
goroutine 1619880 [running]:
panic(0x1745440, 0xc427428630)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/cockroachdb/cockroach/pkg/rpc.NewContext(0x0, 0x0, 0x0, 0x0, 0xc42dbae280, 0x0, 0xc42f988000, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/rpc/context.go:118 +0x962
github.com/cockroachdb/cockroach/pkg/sql_test.newKVNative(0xc42fafe280, 0x6, 0x19154b9)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/kv_test.go:69 +0x31f
github.com/cockroachdb/cockroach/pkg/sql_test.runKVBenchmark(0xc42fafe280, 0x19154b9, 0x6, 0x1915d5f, 0x6, 0x1)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/kv_test.go:289 +0x717
github.com/cockroachdb/cockroach/pkg/sql_test.BenchmarkKVInsert1_Native(0xc42fafe280)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/kv_test.go:322 +0x5e
testing.(*B).runN(0xc42fafe280, 0x1)
/usr/local/go/src/testing/benchmark.go:139 +0xaa
testing.(*B).run1.func1(0xc42fafe280)
/usr/local/go/src/testing/benchmark.go:208 +0x5a
created by testing.(*B).run1
/usr/local/go/src/testing/benchmark.go:209 +0x7f
Okay, the nil clock panic is fixed but the above failures remain.
Tracking the above failures in #10551.
After ignoring the problems in #10551, a new set of problems turned up. The sql benchmarks now pass, but the storage benchmarks seem to hang indefinitely on the very first iteration of `BenchmarkReplicaSnapshot-16`. `stderr` is pretty sparse:
I161110 01:45:45.896322 1 rand.go:76 Random seed: -8566366354663296610
I161110 01:45:45.901316 44 gossip/gossip.go:237 [n?] initial resolvers: []
W161110 01:45:45.901389 44 gossip/gossip.go:1055 [n?] no resolvers found; use --join to specify a connected node
I161110 01:45:45.901466 44 base/node_id.go:62 NodeID set to 1
I161110 01:45:45.901753 44 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161110 01:45:45.924072 51 storage/replica_proposal.go:328 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 410761h45m46.820924177s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-10 01:45:45.923900995 +0000 UTC]
I161110 01:46:02.682472 44 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot ffacc4de at index 58289 in 209.989µs.
I161110 01:46:02.682550 44 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161110 01:46:02.687700 101 gossip/gossip.go:237 [n?] initial resolvers: []
W161110 01:46:02.687724 101 gossip/gossip.go:1055 [n?] no resolvers found; use --join to specify a connected node
I161110 01:46:02.687763 101 base/node_id.go:62 NodeID set to 1
I161110 01:46:02.687814 101 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161110 01:46:02.690377 110 storage/replica_proposal.go:328 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 410761h46m3.589595495s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-10 01:46:02.690335401 +0000 UTC]
I161110 01:46:19.191801 101 storage/replica_raftstorage.go:445 [s1,r1/1:/M{in-ax}] generated snapshot 43717985 at index 58288 in 143.803µs.
E161110 01:56:02.691713 189 storage/queue.go:579 [replicate,s1,r1/1:/M{in-ax}] purgatory: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161110 02:06:02.692100 349 storage/queue.go:646 [replicate] 1 replicas failing with "0 of 0 stores with an attribute matching []; likely not enough nodes in cluster"
E161110 02:16:02.692040 349 storage/queue.go:646 [replicate] 1 replicas failing with "0 of 0 stores with an attribute matching []; likely not enough nodes in cluster"
E161110 02:26:02.692106 349 storage/queue.go:646 [replicate] 1 replicas failing with "0 of 0 stores with an attribute matching []; likely not enough nodes in cluster"
These errors repeat every ten minutes.
The nightly benchmarks pass most of the time now. Closing this in favor of more targeted issues for occasional failures.
The nightly benchmark seems to have stalled during the `sql.test` portion. The `sql.test.stderr` log is growing forever with the following log lines. The test invocation on TeamCity is here: https://teamcity.cockroachdb.com/viewLog.html?buildId=5068&buildTypeId=Cockroach_BenchmarkTests&tab=buildLog

UPDATE: current issue is that `sql.BenchmarkPgbenchExec_{Cockroach,Postgres}` fail in the absence of `pgbench`, which we do not install anywhere.