tarantool / tt

Command-line utility to manage Tarantool applications

`tt replicaset upgrade`: support remote replica set #1030

Closed: mandesero closed this 1 day ago

mandesero commented 1 week ago

This patch adds the ability to upgrade the schema on a remote replica set.


Example

Start the vshard app.

config.yaml

```yaml
credentials:
  users:
    client:
      password: 'secret'
      roles: [super]
    replicator:
      password: 'secret'
      roles: [replication]
    storage:
      password: 'secret'
      roles: [sharding]

iproto:
  advertise:
    peer:
      login: replicator
    sharding:
      login: storage

sharding:
  bucket_count: 3000

groups:
  storages:
    app:
      module: storage
    sharding:
      roles: [storage]
    replication:
      failover: manual
    replicasets:
      storage-001:
        leader: storage-001-a
        instances:
          storage-001-a:
            iproto:
              listen:
                - uri: localhost:3301
          storage-001-b:
            iproto:
              listen:
                - uri: localhost:3302
      storage-002:
        leader: storage-002-a
        instances:
          storage-002-a:
            iproto:
              listen:
                - uri: localhost:3303
          storage-002-b:
            iproto:
              listen:
                - uri: localhost:3304
  routers:
    app:
      module: router
    sharding:
      roles: [router]
    replicasets:
      router-001:
        instances:
          router-001-a:
            iproto:
              listen:
                - uri: localhost:3305
```
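A minimal way to bring the cluster up with tt before running the upgrade (a sketch; it assumes the application lives in the current tt environment under the name new-app, the same name used in the status example further down):

$ tt start new-app
$ tt status new-app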

To upgrade the schema of this cluster, you need to upgrade each replica set individually. Pick one instance from each replica set and run:

$ tt replicaset upgrade tcp://client:secret@127.0.0.1:3301
  • storage-001: ok
$ tt replicaset upgrade tcp://client:secret@127.0.0.1:3304
  • storage-002: ok
$ tt replicaset upgrade tcp://client:secret@127.0.0.1:3305
  • router-001: ok
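To double-check the result, one can connect to an upgraded instance and read the stored schema version (a sketch; it assumes the client user is allowed to read box.space._schema):

$ tt connect tcp://client:secret@127.0.0.1:3301
127.0.0.1:3301> box.space._schema:get{'version'}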

Closes #968

mandesero commented 1 week ago

https://github.com/tarantool/tt/blob/a1c2065941ca1cc1308412f989d7211a251fd73a/cli/replicaset/cmd/upgrade.go#L152-L165

This point was already discussed (in the previous patch), but I found that the discovery mechanism cannot determine the mode of all instances in a replica set (although it should) when it is given a URI.

For example, I run tt replicaset status to demonstrate the problem:

$ tt replicaset status tcp://client:secret@127.0.0.1:3301
Orchestrator:      centralized config
Replicasets state: bootstrapped

• storage-001
  Failover: manual
    • storage-001-a localhost:3301 rw
    • storage-001-b localhost:3302 unknown

But it knows the correct address, localhost:3302, to connect to this instance, so I think this is a bug. Without the URI everything is fine:

$ tt replicaset status new-app:storage-001-a
Orchestrator:      centralized config
Replicasets state: bootstrapped

• router-001
  Failover: off
  Master:   single
    • router-001-a localhost:3305 rw
• storage-001
  Failover: manual
  Master:   single
    • storage-001-a localhost:3301 rw
    • storage-001-b localhost:3302 read
• storage-002
  Failover: manual
  Master:   single
    • storage-002-a localhost:3303 rw
    • storage-002-b localhost:3304 read
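For what it is worth, the mode of storage-001-b looks discoverable by connecting to the reported URI directly (a sketch; it assumes the client credentials are accepted on localhost:3302, prompt and output shown approximately):

$ tt connect tcp://client:secret@localhost:3302
localhost:3302> box.info.ro
---
- true
...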
oleg-jukovec commented 4 days ago

Please rebase on the master branch.