Closed: Overbryd closed this issue 4 years ago.
I ran into this issue, too. It looks like #5744 introduced the securityGroupOverride
feature, so looking through that pull request might be a good starting point for a bugfix.
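For reference, the override that pull request added lives on the API load balancer in the cluster spec. A minimal sketch, assuming the field path described in the security_groups.md doc linked below (the security group ID is a placeholder):
spec:
  api:
    loadBalancer:
      type: Public
      securityGroupOverride: sg-0123456789abcdef0
With that set, kops attaches the pre-existing, externally managed security group to the API ELB instead of creating its own.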
I agree with this. According to the doc at https://github.com/kubernetes/kops/blob/master/docs/security_groups.md, you should use --lifecycle-overrides, which is supposed to keep kops from touching the security groups or their rules:
kops update cluster ${CLUSTER_NAME} --yes --lifecycle-overrides SecurityGroup=ExistsAndWarnIfChanges,SecurityGroupRule=ExistsAndWarnIfChanges
I think that when the override is used, kops should take it into account and just leave the security groups alone altogether, without requiring the extra syntax on every update.
Facing this problem as well. I don't want kops to leave the default API and SSH access open to everyone (0.0.0.0/0). When I modify the security groups after cluster creation to my own values, everything is fine until the next kops update is executed, which overrides my values with the default 0.0.0.0/0. However, when kops update is run with --lifecycle-overrides SecurityGroup=ExistsAndWarnIfChanges,SecurityGroupRule=ExistsAndWarnIfChanges, those rules actually get removed. I would expect kops to simply ignore security group updates, since I now manage them myself.
@tomekit Exactly the same problem on my side. Do you have any idea for a workaround for this issue?
OK, I wanted to bump this because it's also blowing away my ingress and egress rules on a security group when I specify the override. I am definitely no Go expert, but my suspicion is that the issue is somewhere around here: https://github.com/kubernetes/kops/blob/a8b0e1b2745b431edae3ba3c105ae221a3b373da/upup/pkg/fi/cloudup/awstasks/securitygroup.go#L85 . It feels like, if that lifecycle-overrides flag is set for security groups, that should be set to an empty list?
I think I can explain my 443 rules getting removed by a combo of the above and: https://github.com/kubernetes/kops/blob/b2d90fd2c0e466a48ff5ebbfca472564f21b21a7/pkg/model/awsmodel/api_loadbalancer.go#L175
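To make the suspicion above concrete, here is a rough sketch of the kind of guard being suggested. The Lifecycle, SecurityGroupTask, and RemoveExtraRules names below are simplified stand-ins for kops' internals, not the actual code: if the SecurityGroup lifecycle has been overridden to ExistsAndWarnIfChanges, the model would stop scheduling removal of "extra" rules it didn't create (such as rules added out of band by terraform).

package main

import "fmt"

// Lifecycle is a simplified stand-in for kops' lifecycle setting.
type Lifecycle string

const (
	LifecycleSync                   Lifecycle = "Sync"
	LifecycleExistsAndWarnIfChanges Lifecycle = "ExistsAndWarnIfChanges"
)

// SecurityGroupTask is a simplified stand-in for the awstasks SecurityGroup task.
type SecurityGroupTask struct {
	Name             string
	Lifecycle        Lifecycle
	RemoveExtraRules []string // patterns of rules kops would normally prune
}

// buildSecurityGroupTask illustrates the proposed behaviour: when the group is
// externally managed (lifecycle overridden), don't ask kops to strip rules.
func buildSecurityGroupTask(name string, lifecycle Lifecycle) *SecurityGroupTask {
	t := &SecurityGroupTask{Name: name, Lifecycle: lifecycle}
	if lifecycle == LifecycleExistsAndWarnIfChanges {
		// Externally managed: leave existing ingress/egress rules untouched.
		t.RemoveExtraRules = nil
		return t
	}
	// Default behaviour: kops reconciles the group and prunes unknown rules.
	t.RemoveExtraRules = []string{"port=443"}
	return t
}

func main() {
	fmt.Printf("%+v\n", buildSecurityGroupTask("api-elb", LifecycleExistsAndWarnIfChanges))
	fmt.Printf("%+v\n", buildSecurityGroupTask("api-elb", LifecycleSync))
}

Whether the rule-removal list is really the right place to hook this in would need to be confirmed against the two files linked above.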
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
By the way, if all you want to do is change which IP addresses have SSH and API access, you can set that using kops edit cluster under kubernetesApiAccess and sshAccess. Set them to some more specific IP address(es).
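A minimal sketch of what that edit looks like in the cluster spec, assuming the standard field layout (the CIDR below is just a placeholder):
spec:
  kubernetesApiAccess:
  - 203.0.113.0/24
  sshAccess:
  - 203.0.113.0/24
kops then derives the API and SSH security group rules from those values on the next update.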
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
@bzuelke: You can't reopen an issue/PR unless you authored it or you are a collaborator.
No, this is something that should be looked at.
/reopen
@rifelpet: Reopened this issue.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
@Lampino I noticed your question too late. I didn't solve kops overriding my SG rules, but I did find the configuration options for both SSH and API access, so it's now actually kops managing these rules:
sshAccess:
- <ip.ip.ip.ip>/32
kubernetesApiAccess:
- <ip.ip.ip.ip>/32
1. What kops version are you running? The command kops version will display this information.
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
5. What happened after the commands executed?
Kops is removing existing SecurityGroupRules of the ELB for the Kubernetes API, although those security groups are explicitly managed by us (in a separate terraform state).
6. What did you expect to happen?
Kops should leave the SecurityGroupRules as they are; we manage them. Every update to the cluster tampers with the security groups, making the ELB for the Kubernetes API unreachable until we terraform plan & apply our rules again.
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
I created a redacted log of the above command: https://gist.github.com/Overbryd/a42c1c5995280930fb63477da81243f7
9. Anything else do we need to know?