petrenko-hilbert opened 1 year ago
I'm facing the same issue with a MongoDB sharded cluster. There are three kinds of hosts: mongos, mongocfg, and mongod, and any attempt to remove a host from the middle of the host list produces a plan with a global "shuffling" of the hosts.
That is a problem on its own. In addition, importing state through terraform import does not import the zookeeper hosts, and the shards are ordered by network zone rather than by shard number, which makes the diff very large (and makes my eye twitch).
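For illustration, here's a minimal sketch of why removing a host in the middle causes the shuffling, assuming the usual `host {}` blocks of `yandex_mdb_mongodb_cluster` (attribute names are from memory and may differ; zone and subnet IDs are placeholders). The hosts form one ordered list, so deleting a block in the middle shifts the index of every host after it, and the plan rewrites all of them:

```hcl
resource "yandex_mdb_mongodb_cluster" "this" {
  # ...cluster-level settings omitted...

  host {                    # index 0
    type      = "MONGOCFG"
    zone_id   = "ru-central1-a"
    subnet_id = "example-subnet-a"
  }

  host {                    # index 1: removing this block...
    type      = "MONGOS"
    zone_id   = "ru-central1-b"
    subnet_id = "example-subnet-b"
  }

  host {                    # index 2: ...shifts this MONGOD host to index 1,
    type      = "MONGOD"    # so the plan tries to "change" the MONGOS host into a MONGOD
    zone_id   = "ru-central1-c"
    subnet_id = "example-subnet-c"
  }
}
```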
Hello there!
Using version 0.83.0
We're building a terragrunt module, so we implement dynamic "host" blocks in it: one for CLICKHOUSE hosts and another for ZOOKEEPER hosts in case they are needed.
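The relevant part of the module looks roughly like this (a simplified sketch, not the exact module; attribute names such as `zone`, `subnet_id`, and `shard_name` follow the `yandex_mdb_clickhouse_cluster` schema and may differ from the real code):

```hcl
resource "yandex_mdb_clickhouse_cluster" "this" {
  # ...cluster-level settings omitted...

  dynamic "host" {
    for_each = var.clickhouse_hosts
    content {
      type       = "CLICKHOUSE"
      zone       = host.value.zone
      subnet_id  = host.value.subnet_id
      shard_name = host.value.shard_name
    }
  }

  dynamic "host" {
    for_each = var.zookeeper_hosts
    content {
      type      = "ZOOKEEPER"
      zone      = host.value.zone
      subnet_id = host.value.subnet_id
    }
  }
}
```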
Variables look like this
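(The exact definitions aren't reproduced here; a minimal sketch, assuming two list-of-objects variables named `clickhouse_hosts` and `zookeeper_hosts`:)

```hcl
variable "clickhouse_hosts" {
  description = "CLICKHOUSE hosts, grouped by shard_name"
  type = list(object({
    zone       = string
    subnet_id  = string
    shard_name = string
  }))
  default = []
}

variable "zookeeper_hosts" {
  description = "ZOOKEEPER hosts (needed once there is more than one shard)"
  type = list(object({
    zone      = string
    subnet_id = string
  }))
  default = []
}
```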
So Terragrunt inputs look like this
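(Again a sketch rather than the real values; zone and subnet IDs are placeholders:)

```hcl
inputs = {
  clickhouse_hosts = [
    { zone = "ru-central1-a", subnet_id = "example-subnet-a", shard_name = "shard1" },
    { zone = "ru-central1-b", subnet_id = "example-subnet-b", shard_name = "shard2" },
    # deleting the host below from shard2 is what produces the broken plan
    { zone = "ru-central1-c", subnet_id = "example-subnet-c", shard_name = "shard2" },
  ]

  zookeeper_hosts = [
    { zone = "ru-central1-a", subnet_id = "example-subnet-a" },
    { zone = "ru-central1-b", subnet_id = "example-subnet-b" },
    { zone = "ru-central1-c", subnet_id = "example-subnet-c" },
  ]
}
```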
So we have two shards and three CLICKHOUSE hosts, plus the mandatory three ZOOKEEPER hosts. The problem starts when we want to delete one of the hosts from "shard2": we get a plan that tries to CHANGE one CLICKHOUSE host into a ZOOKEEPER host.
If we try to apply that plan, we get an error
I believe the reason is that both host types use the same `host {}` block, so the resulting changes really do look like a modification of an existing host, even when its type changes. Reducing the number of hosts may not be an everyday necessity, but the task still comes up. I'd suggest implementing separate blocks for describing CLICKHOUSE and ZOOKEEPER hosts, for example `clickhouse_host {}` and an optional `zookeeper_host {}`, as sketched below.
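Something along these lines (hypothetical syntax, not what the provider currently accepts; values are placeholders):

```hcl
resource "yandex_mdb_clickhouse_cluster" "this" {
  # ...

  clickhouse_host {
    zone       = "ru-central1-a"
    subnet_id  = "example-subnet-a"
    shard_name = "shard1"
  }

  # optional: only present when ZooKeeper hosts are needed
  zookeeper_host {
    zone      = "ru-central1-a"
    subnet_id = "example-subnet-a"
  }
}
```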
Meanwhile, I'd be glad to hear any workaround suggestions.