tarantool / vshard

The new generation of sharding based on virtual buckets

flaky test: rebalancer/rebalancer.test.lua #360

Closed Serpentian closed 1 year ago

Serpentian commented 2 years ago

It hangs on both storage engines. Here are the logs: linux_2.7_test-force.txt

It also failed on version 2.9.

Serpentian commented 1 year ago

Sometimes, the test just fails with the following output:

rebalancer/rebalancer.test.lua                  memtx

[Instance "box_1_b" returns with non-zero exit code: 1]

Last 15 lines of Tarantool Log file [Instance "box_1_b"][/home/alver/Programming/Work/vshard/build/test/var/001_rebalancer/box_1_b.log]:
2022-08-18 20:09:48.079 [723433] main/111/main I> remote vclock {1: 47} local vclock {1: 47}
2022-08-18 20:09:48.079 [723433] main/106/gc I> wal/engine cleanup is resumed
2022-08-18 20:09:48.815 [723433] relay/127.0.0.1:51070/101/main I> recover from `/home/alver/Programming/Work/vshard/build/test/var/001_rebalancer/box_1_b/00000000000000000047.xlog'
2022-08-18 20:09:48.869 [723433] main/132/console/::1:41400 I> Starting reconfiguration of replica db778aec-267f-47bb-9347-49828232c8db
2022-08-18 20:09:48.869 [723433] main/132/console/::1:41400 I> Box has been configured
2022-08-18 20:09:48.869 [723433] main/133/lua I> Old replicaset and replica objects are outdated.
2022-08-18 20:09:49.289 [723433] main/112/applier/storage@127.0.0.1:3301 I> can't read row
2022-08-18 20:09:49.289 [723433] main/112/applier/storage@127.0.0.1:3301 coio.c:348 E> SocketError: unexpected EOF when reading from socket, called on fd 29, aka 127.0.0.1:55244, peer of 127.0.0.1:3301: Broken pipe
2022-08-18 20:09:49.289 [723433] main/112/applier/storage@127.0.0.1:3301 I> will retry every 0.10 second
2022-08-18 20:09:49.289 [723433] relay/127.0.0.1:51070/101/main coio.c:348 E> SocketError: unexpected EOF when reading from socket, called on fd 25, aka 127.0.0.1:3302: Broken pipe
2022-08-18 20:09:49.289 [723433] relay/127.0.0.1:51070/101/main I> exiting the relay loop
2022-08-18 20:09:49.321 [723433] main C> got signal 15 - Terminated
2022-08-18 20:09:49.321 [723433] main/143/iproto.shutdown I> tx_binary: stopped
Starting instance box_1_b...
Start failed: builtin/box/console.lua:918: failed to create server localhost:46990: Address already in use
[ fail ]
Gerold103 commented 1 year ago

Not much to do here, I suppose. It can only be properly solved by migrating to Unix sockets, or by moving fully to luatest.
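For context, the "Address already in use" failure comes from binding a TCP port (localhost:46990) that a previous instance has not yet released. A minimal sketch of the Unix-socket alternative, assuming the test harness controls the listen URI (the socket path here is hypothetical):

```lua
-- Instead of a TCP port that can linger in TIME_WAIT between restarts:
--   box.cfg{listen = 'localhost:46990'}
-- listen on a Unix socket scoped to the test's var directory.
-- A stale socket file can simply be unlinked before restart, so there
-- is no "Address already in use" race between test runs.
local fio = require('fio')

local sock_path = fio.pathjoin(fio.cwd(), 'box_1_b.sock')
os.remove(sock_path) -- drop a leftover socket from a previous run, if any

box.cfg{
    listen = 'unix/:' .. sock_path,
}
```

The same `unix/:<path>` URI form would then be used in the replication and vshard configuration so that instances connect to each other over the sockets instead of TCP ports.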

Serpentian commented 1 year ago

Not much to do here, I suppose.

But the hang is still happening; at least that one should be investigated.

Serpentian commented 1 year ago
2023-05-17 02:53:41.226 [23751] main/201/vshard.rebalancer_applier I> Rebalancer workers have started, wait for their termination
2023-05-17 02:53:41.227 [23751] main/127/vshard.rebalancer I> Rebalance routes are sent. Schedule next wakeup after 10.000000 seconds
2023-05-17 02:53:41.230 [23751] main/199/console/127.0.0.1:60522 I> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
2023-05-17 02:53:41.233 [23751] main/127/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
2023-05-17 02:53:41.282 [23751] main/206/vshard.rebalancer_worker_5 I> 100 buckets were successfully sent to ac522f65-aa94-4134-9f64-51ee384f1a54
2023-05-17 02:53:41.282 [23751] main/201/vshard.rebalancer_applier I> Rebalancer routes are applied
2023-05-17 02:53:41.334 [23751] main/220/127.0.0.1:3302 (net.box) I> connected to 127.0.0.1:3302
2023-05-17 02:53:41.339 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:41.443 [23751] main/127/vshard.rebalancer I> Some buckets are not active, retry rebalancing later
2023-05-17 02:53:41.550 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:41.659 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:41.764 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:41.870 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:41.976 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:42.082 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:42.188 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds
2023-05-17 02:53:42.294 [23751] main/127/vshard.rebalancer I> The cluster is balanced ok. Schedule next rebalancing after 3600.000000 seconds

We should not wake up the rebalancer before the log.info(string.rep('a', 1000)) call in the test.
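One way the test could enforce that ordering is to keep the rebalancer paused around the long log write, using the public vshard.storage API; a hedged sketch of such a test fix (the surrounding wakeup call is assumed from the test's context):

```lua
local log = require('log')
local vshard = require('vshard')

-- Keep the rebalancer fiber from running while the long log line is
-- written, so its messages cannot interleave with the test's expected
-- output. rebalancer_disable()/rebalancer_enable() are the public
-- vshard.storage controls for this.
vshard.storage.rebalancer_disable()
log.info(string.rep('a', 1000))
vshard.storage.rebalancer_enable()

-- Only now is it safe for the test to wake the rebalancer up.
vshard.storage.rebalancer_wakeup()
```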