vk-cs / terraform-provider-vkcs


k8s configuration validation problem #457

Closed giggsoff closed 4 months ago

giggsoff commented 5 months ago

Environment

terraform-provider-vkcs v0.7.3

Expected result

k8s cluster deployed with no problems

Actual result

During deployment, after a long time waiting for k8s cluster, I receive:

```
Error: Provider produced inconsistent result after apply

When applying changes to vkcs_kubernetes_node_group.k8s-node-group, provider
"provider[\"registry.terraform.io/vk-cs/vkcs\"]" produced an unexpected new
value: Root object was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```

Additional details

I set the following in the node group manifest:

```hcl
node_count          = 1
autoscaling_enabled = true
min_nodes           = 2
max_nodes           = 3
```

It looks like the provider should validate that min_nodes is less than or equal to node_count, or ignore the provided node_count and use min_nodes as the base. If neither is feasible, the constraint should at least be documented.
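Until such validation exists in the provider, a plan-time guard can be added on the user side with a resource precondition (available in Terraform >= 1.2). A minimal sketch, assuming the node group arguments come from hypothetical variables; the other required arguments are elided:

```hcl
variable "node_count" {
  type    = number
  default = 1
}

variable "min_nodes" {
  type    = number
  default = 2
}

variable "max_nodes" {
  type    = number
  default = 3
}

resource "vkcs_kubernetes_node_group" "k8s-node-group" {
  # cluster_id, name, and the other required arguments go here.
  node_count          = var.node_count
  autoscaling_enabled = true
  min_nodes           = var.min_nodes
  max_nodes           = var.max_nodes

  lifecycle {
    # Fail during plan, instead of after a long apply, if the initial
    # node_count falls outside the autoscaler's [min_nodes, max_nodes] range.
    precondition {
      condition     = var.node_count >= var.min_nodes && var.node_count <= var.max_nodes
      error_message = "node_count must lie within [min_nodes, max_nodes] when autoscaling is enabled."
    }
  }
}
```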

ftersin commented 5 months ago

Hi. Thank you for the info.

Do you believe this error relates to incorrect values of node group arguments? We occasionally catch the same error with correct values.

giggsoff commented 5 months ago

> Hi. Thank you for the info.
>
> Do you believe this error relates to incorrect values of node group arguments? We occasionally catch the same error with correct values.

Hi! I caught that error several times. After that, I changed node_count to 2 in my sample above, and the cluster was created smoothly several times. I assume the problem is an options mismatch, because the cluster became ready regardless of the error in terraform apply, with only one node shown in the node group UI (a few minutes after the error).
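For reference, the variant that deployed cleanly after the change, with node_count raised to match min_nodes (other arguments as above):

```hcl
node_count          = 2  # now >= min_nodes, consistent with the autoscaler's floor
autoscaling_enabled = true
min_nodes           = 2
max_nodes           = 3
```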

schirevko commented 4 months ago

Hi, does the problem still persist? Using the config from the original post, I was not able to reproduce it locally.

giggsoff commented 4 months ago

@schirevko I also cannot reproduce it. Hopefully it was resolved on the backend side. Let me close the issue.