Closed: dgamanenko closed this issue 3 years ago.
Refs: https://github.com/kubernetes/kops/pull/10728
Currently, when the user specifies a custom IAM Profile, that IAM Profile must already exist in AWS before the kops update
command is run, because kops looks up the associated IAM Role from the specified IAM Profile during its update tasks.
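As a rough illustration of that lookup (not kops' actual code): AWS's IAM GetInstanceProfile API returns the roles attached to a profile, and the role can be resolved from there. The profile name k8s-test001, the role name, and the JSON below are made-up samples; against a real account the data would come from `aws iam get-instance-profile --instance-profile-name <name>`.

```shell
# Hedged sketch: extract the attached role name from an instance-profile
# description. The JSON mirrors the shape of the IAM GetInstanceProfile
# response; "k8s-test001" and "k8s-test001-role" are hypothetical names.
# With real credentials this would come from:
#   aws iam get-instance-profile --instance-profile-name k8s-test001
sample='{"InstanceProfile":{"InstanceProfileName":"k8s-test001","Roles":[{"RoleName":"k8s-test001-role"}]}}'
role=$(printf '%s' "$sample" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["InstanceProfile"]["Roles"][0]["RoleName"])')
echo "$role"
```

This is why the profile has to exist up front: if the GetInstanceProfile call fails, kops has no role to work with.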
Thinking about Terraform, should the process of getting the IAM Role be executed in kops-controller instead? (ref: https://github.com/kubernetes/kops/pull/10728#issuecomment-773585055) What do you think? @rifelpet @johngmyers
If a custom profile is specified in the IG spec, don't that profile and role need to have been created outside of kops beforehand? What is the ownership model here?
Does this work with direct render? Perhaps this is a case of kops needing to cache the information rather than expecting it to have been rendered?
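For context, a custom profile in the IG spec looks roughly like this. This is a hedged sketch: the cluster name, IG name, subnet, and account ID are placeholders, and the relevant field is spec.iam.profile as documented for kops on AWS.

```yaml
# Sketch of an InstanceGroup referencing a pre-created, externally
# managed instance profile. All names and the account ID are placeholders.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: example.k8s.local
  name: k8s-test001
spec:
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  role: Node
  iam:
    profile: arn:aws:iam::123456789012:instance-profile/k8s-test001
  subnets:
  - us-east-1a
```

In this model the profile (and the role behind it) is owned by the user, not by kops, which is what prompts the ownership question above.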
> If a custom profile is specified in the IG spec, don't that profile and role need to have been created outside of kops beforehand?

At the moment, yes: the profile and role must be created in advance.
> Does this work with direct render?

I think it has the same problem even with direct render. However, when using direct render we usually create the IAM Role and IAM Profile before running the kops command, so it hasn't been a problem in practice.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
cloud provider: AWS
command: kops update cluster
When trying to create an instance group with its own unique instance profile, the following problem occurs:
After adding the instance profile manually (directly in kubernetes.tf), this step succeeded and
kops generated the required resources.
After that, however, the new IG can't join the cluster.
The workaround was to manually edit the kops-controller configuration (adding k8s-test001 to the config and annotations) and recreate the kops-controller pods.
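For anyone hitting the same wall: as of kops 1.19, node bootstrap goes through kops-controller, which only trusts roles listed in its configuration. The edit described above amounts to something like the following. This is a hedged sketch of the config.yaml key in the kube-system/kops-controller ConfigMap; the field path follows the nodesRoles list used by the AWS verifier, and the default role name is a placeholder.

```yaml
# Sketch: config.yaml inside the kube-system/kops-controller ConfigMap.
# The custom role backing the instance profile must appear in
# server.provider.aws.nodesRoles, or new nodes are rejected at bootstrap.
server:
  provider:
    aws:
      nodesRoles:
      - nodes.example.k8s.local   # default kops-managed role (placeholder name)
      - k8s-test001               # custom role, added manually
```

After updating the ConfigMap, the kops-controller pods have to be recreated (for example by deleting them) so they pick up the new config, which matches the workaround described above.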
All of these steps are required as of kops v1.19.