**Closed** · Gerold103 closed this issue 1 year ago
Sometimes storage_1_b doesn't upgrade its schema.
```
upgrade/upgrade.test.lua                                        [ fail ]

Test failed! Result content mismatch:
--- upgrade/upgrade.result	Mon May 23 22:08:57 2022
+++ /home/runner/work/vshard/vshard/test/var/rejects/upgrade/upgrade.reject	Mon May 23 22:13:09 2022
@@ -180,11 +180,10 @@
 | ...
 box.space._schema:get({'vshard_version'})
 | ---
-| - ['vshard_version', 0, 1, 16, 0]
 | ...
 vshard.storage.internal.schema_current_version()
 | ---
-| - '{0.1.16.0}'
+| - '{0.1.15.0}'
 | ...
 vshard.storage.internal.schema_latest_version
 | ---

Last 15 lines of Tarantool Log file [Instance "box"][/home/runner/work/vshard/vshard/test/var/001_upgrade/box.log]:
2022-05-23 22:13:07.730 [6394] main/101/box I> assigned id 1 to replica 5e6cfaff-6501-40ce-933e-d3da41253e71
2022-05-23 22:13:07.730 [6394] main/101/box I> cluster uuid 4d77faa2-08ec-45c0-8070-30db701e2a4b
2022-05-23 22:13:07.732 [6394] snapshot/101/main I> saving snapshot `/home/runner/work/vshard/vshard/test/var/001_upgrade/box/00000000000000000000.snap.inprogress'
2022-05-23 22:13:07.734 [6394] snapshot/101/main I> done
2022-05-23 22:13:07.734 [6394] main/101/box I> ready to accept requests
2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> started
2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> scheduled the next snapshot at Mon May 23 23:52:50 2022
2022-05-23 22:13:07.735 [6394] main/113/console/::1:12142 I> started
2022-05-23 22:13:07.735 [6394] main C> entering the event loop
Previous HEAD position was e42d3e3 doc: create 0.1.20 changelog
HEAD is now at 79a4dbf Improve compatibility with 1.9
2022-05-23 22:13:08.296 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_1_a"
2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23 22:13:08.406 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_2_a"
Reproduce file /home/runner/work/vshard/vshard/test/var/reproduce/001_upgrade.list.yaml
---
- [upgrade/upgrade.test.lua, null]
...
```
The logs don't tell much. It happens on Tarantool 1.10; I could only reproduce it in CI, and it disappears after some re-runs.
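If the failure is a race where the replica applies the schema upgrade slightly later than the test reads the version, one way to stabilize the test would be to poll for the expected version instead of reading it once. A rough sketch using test-run's `wait_cond` (the 10-second timeout, the `tostring` comparison, and the exact expected string are assumptions based on the result file above, not a verified fix):

```lua
-- Hypothetical stabilization for upgrade.test.lua: instead of asserting on
-- schema_current_version() immediately, wait until storage_1_b has actually
-- applied the latest vshard schema upgrade.
test_run:switch('storage_1_b')
test_run:wait_cond(function()
    -- schema_current_version() is the same getter the failing diff shows;
    -- assume its value stringifies as '{0.1.16.0}' once the upgrade lands.
    return tostring(vshard.storage.internal.schema_current_version())
        == '{0.1.16.0}'
end, 10)
```

This would only mask the flakiness in the test, of course; it wouldn't explain why the replica lags behind in the first place.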