Closed: dokshina closed this issue 2 years ago
Cartridge: 2.4.0

Steps to reproduce:

1. Join new instances:

```lua
cartridge.admin_edit_topology({
    replicasets = {
        {alias = 'r1', join_servers = { {uri = 'uri-1'}, {uri = 'uri-2'} }},
        {alias = 'r2', join_servers = { {uri = 'uri-3'}, {uri = 'uri-4'} }},
    },
})
```
2. Edit the new instances' zones:

```lua
cartridge.admin_edit_topology({
    servers = {
        {uuid = 'uuid-1', zone = 'zone-1'},
        {uuid = 'uuid-2', zone = 'zone-3'},
    },
})
```
The second `admin_edit_topology` call returns a `Peer closed` error.
Instance logs:
```
myapp.storage-1[4525]: main/169/main I> remote vclock {1: 42, 2: 1, 3: 1, 4: 1} local vclock {1: 42, 2: 1, 3: 1, 4: 1}
myapp.storage-1[4525]: relay/172.22.0.5:37050/101/main I> recover from `/var/lib/tarantool/myapp.storage-1/00000000000000000000.xlog'
myapp.storage-1[4525]: relay/172.22.0.5:37050/101/main coio.cc:379 !> SystemError unexpected EOF when reading from socket, called on fd 35, aka 17
myapp.storage-1[4525]: relay/172.22.0.5:37050/101/main C> exiting the relay loop
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:419 W> Committed config at vm1:3301
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:419 W> Committed config at vm1:3302
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:419 W> Committed config at vm2:3311
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:419 W> Committed config at vm3:3312
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:419 W> Committed config at vm4:3312
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:456 W> Clusterwide config updated successfully
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:274 W> Updating config clusterwide...
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:375 W> (2PC) Preparation stage...
myapp.storage-1[4525]: main/169/main I> subscribed replica 2e56f774-61c7-4a6b-968b-56ce7b55a8df at fd 35, aka 172.22.0.2:3301, peer of 172.22.0.5:
myapp.storage-1[4525]: main/169/main I> remote vclock {1: 42, 2: 1, 3: 1, 4: 1} local vclock {1: 42, 2: 1, 3: 1, 4: 1}
myapp.storage-1[4525]: relay/172.22.0.5:37054/101/main I> recover from `/var/lib/tarantool/myapp.storage-1/00000000000000000000.xlog'
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:385 W> Prepared for config update at vm1:3301
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:385 W> Prepared for config update at vm1:3302
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:385 W> Prepared for config update at vm2:3311
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:385 W> Prepared for config update at vm3:3312
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:395 E> Error preparing for config update at vm4:3312: NetboxCallError: Peer closed
stack traceback:
    builtin/box/net_box.lua:1148: in function '_request'
    builtin/box/net_box.lua:1180: in function <builtin/box/net_box.lua:1176>
    [C]: in function 'xpcall'
    .../share/tarantool/myapp/.rocks/share/tarantool/errors.lua:145: in function 'pcall'
    .../share/tarantool/myapp/.rocks/share/tarantool/errors.lua:372: in function 'netbox_call'
    ...arantool/myapp/.rocks/share/tarantool/cartridge/pool.lua:151: in function <...arantool/myapp/.rocks/share/tarant
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:435 W> (2PC) Abort stage...
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:444 W> Aborted config update at vm1:3301
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:444 W> Aborted config update at vm1:3302
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:444 W> Aborted config update at vm2:3311
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:444 W> Aborted config update at vm3:3312
myapp.storage-1[4525]: main/112/console/unix/: twophase.lua:459 E> Clusterwide config update failed
```
The behaviour differs between Tarantool versions.
Gently closing, because it is too hard to implement a proper regression test that checks the Tarantool version.
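For reference, a minimal sketch of what such a version-gated test could look like, assuming a luatest-based suite; the group name, the skipped version pattern, and the test body are illustrative only, not the actual cartridge test code:

```lua
local t = require('luatest')
local g = t.group('edit_topology_zones')

-- _TARANTOOL is a built-in global holding the running Tarantool version string
local tarantool_version = _G._TARANTOOL

g.test_edit_zones_after_join = function()
    -- Hypothetical gate: skip on versions where the behaviour is known to differ
    t.skip_if(
        tarantool_version:match('^1%.10%.') ~= nil,
        'Behaviour differs on this Tarantool version'
    )

    -- 1. join new instances via cartridge.admin_edit_topology(...)
    -- 2. edit their zones with a second admin_edit_topology(...) call
    -- 3. assert that the second call does not fail with "Peer closed"
end
```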