aramase closed this pull request 5 years ago
Welcome @aramase!
It looks like this is your first PR to kubernetes-incubator/ip-masq-agent 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-incubator/ip-masq-agent has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
/assign @thockin
This is for masquerading, as we discussed last week
/cc
@vllry: GitHub didn't allow me to request PR reviews from the following users: vllry.
Note that only kubernetes-incubator members and repo collaborators can review this PR, and authors cannot review their own PRs.
/cc @aojea
@aojea: GitHub didn't allow me to request PR reviews from the following users: aojea.
Note that only kubernetes-incubator members and repo collaborators can review this PR, and authors cannot review their own PRs.
@aramase I think that this will fail on systems that don't have both ipv4 and ipv6 enabled. It's common to have both enabled, but we should consider systems with only one protocol enabled. In addition, I think the user experience will be better if the ipv4 and ipv6 CIDRs can be configured independently. I shared some thoughts in https://github.com/kubernetes-incubator/ip-masq-agent/issues/43.
@aojea Thank you for taking a look at this. The configuration takes in a CIDR list, so I thought it might be easier for users to just provide a list of all the CIDRs (ipv4/ipv6). I'm thinking maybe we can add an agent flag to enable ipv6? By default that will be set to false. Thoughts?
> The configuration takes in a CIDR list, so I thought it might be easier for users to just provide a list of all the CIDRs (ipv4/ipv6).

You are totally right... also, this way the configuration is backward compatible.

> I'm thinking maybe we can add an agent flag to enable ipv6? By default that will be set to false. Thoughts?

I think that's a good idea. My thoughts are: users should be able to upgrade without any changes. What's the best option, a flag or autodetecting it from the configuration? Users should also be able to use the agent in IPv4-only, IPv6-only, or dual-stack mode.

Regarding dual stack I have one doubt: what should happen if one of the protocols fails? Let's say someone uses dual stack and the system doesn't have the IPv6 tools and can't install the ipv6 rules. Should the agent fail, or keep working with the IPv4 iptables rules?
I think the best option is to add a flag for ipv6. If enabled, we no-masq the link-local range. For the user-provided list we don't do anything special; it's just a list of CIDRs.
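For reference, a mixed ipv4/ipv6 list like the one discussed could look like this in the agent's ConfigMap (a sketch only: the `nonMasqueradeCIDRs` and `masqLinkLocal` keys follow the agent's existing config format, and `fd00::/8` is just an example range):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # existing ipv4 entry
      - fd00::/8          # example ipv6 entry (requires ipv6 support)
    masqLinkLocal: false
```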
> I think the best option is to add a flag for ipv6. If enabled, we no-masq the link-local range. For the user-provided list we don't do anything special; it's just a list of CIDRs.

@khenidak If we add a flag, is the suggestion that we masquerade everything, and if --enable-ipv6 is set then we no-masquerade link-local? No changes to cluster-cidr makes sense. If the user provides an ipv6 CIDR in the list, then we add that in iptables.
cc: @varunmar
I'd like a way to ensure that today's behavior can be replicated exactly after this change goes in.
Ideally, the defaults would be unchanged: the same flags and configs used today should produce the same behavior in version n+1. I'd be okay with it if we had to explicitly specify a flag to return to the old behavior. I'd be much less happy if it also required a change in the default config (many customers run this addon with no additional config; it would be unfortunate if we forced them all to manage another configmap before they are ready).
I think @khenidak's suggestion is a good one - a flag named --enable-ipv6 that is false by default. If true, then write the link-local ip6 rule if the configmap is empty. If a list of CIDRs is present, then use them instead.
So the remaining questions are: what should happen if the --enable-ipv6 flag is false and a configmap with ipv6 addresses is passed in? And what should happen if IPv6 tools are not installed on the machine (or, less commonly, no ipv4 tools)?
I think the answer to both should be that an error condition is raised and the daemon must not report healthy status. The easiest way may be to turn it into a crash loop until the config is fixed, but maybe there are better solutions.
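A minimal sketch of that validation, assuming a hypothetical `validateConfig` helper (names here are illustrative, not the PR's actual code): returning an error lets the daemon exit non-zero and crash-loop until the config is fixed.

```go
package main

import (
	"fmt"
	"net"
)

// hasIPv6 reports whether any entry in the CIDR list is an IPv6 network.
func hasIPv6(cidrs []string) (bool, error) {
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return false, fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		if ipnet.IP.To4() == nil {
			return true, nil
		}
	}
	return false, nil
}

// validateConfig rejects configs that list IPv6 CIDRs while the
// --enable-ipv6 flag is off.
func validateConfig(enableIPv6 bool, cidrs []string) error {
	v6, err := hasIPv6(cidrs)
	if err != nil {
		return err
	}
	if v6 && !enableIPv6 {
		return fmt.Errorf("IPv6 CIDRs configured but --enable-ipv6=false")
	}
	return nil
}

func main() {
	// A non-nil error here would make the daemon exit and crash-loop.
	fmt.Println(validateConfig(false, []string{"10.0.0.0/8", "fd00::/8"}))
}
```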
Here's what I think the behavior should be:
| Configuration | Behavior |
|---|---|
| --enable-ipv6=false, no CIDR entries | no change from today's behavior (no-masq default set of ipv4 CIDRs, do not no-masq ipv6 link-local) |
| --enable-ipv6=false, CIDR entries contain ipv4 only | no-masq the CIDR entries only |
| --enable-ipv6=true, no CIDR entries | no-masq default set of ipv4 CIDRs + no-masq ipv6 link-local |
| --enable-ipv6=true, CIDR entries contain ipv4 and/or ipv6 | no-masq CIDR entries only |
| --enable-ipv6=true, CIDR entries contain ipv4 only | no-masq CIDR entries only (do not no-masq ipv6 link-local) |
| --enable-ipv6=false, CIDR entries contain ipv6 entries | error, crash |
| --enable-ipv6=true, no ipv6 or ip6tables on machine | error, crash |
> I'd be okay with it if we had to explicitly specify a flag to return to the old behavior.

By introducing the --enable-ipv6 agent flag we will retain the old behavior. The default will be false, which means it'll only set up the ipv4 chain and rules.

> I'd be much less happy if it also required a change in the default config (many customers run this addon with no additional config, it would be unfortunate if we forced them all to manage another configmap before they are ready)

Totally understand. We shouldn't change the default config.

> I think @khenidak's suggestion is a good one - a flag named --enable-ipv6 that is false by default. If true, then write the link-local ip6 rule if the configmap is empty. If a list of CIDRs is present, then use them instead.

Instead of checking only whether the configmap with CIDRs exists, maybe we should check whether ipv6 CIDRs exist in the list? If not, we should still write the link-local ip6 rule when ipv6 is enabled.
I totally agree with the behavior you suggested, with just a few changes:
| Configuration | Behavior |
|---|---|
| --enable-ipv6=false, no CIDR entries | no change from today's behavior (no-masq default set of ipv4 CIDRs, do not no-masq ipv6 link-local) |
| --enable-ipv6=false, CIDR entries contain ipv4 only | no-masq the CIDR entries only |
| --enable-ipv6=true, no CIDR entries | no-masq default set of ipv4 CIDRs + no-masq ipv6 link-local |
| --enable-ipv6=true, CIDR entries contain ipv4 only | no-masq CIDR entries + no-masq ipv6 link-local |
| --enable-ipv6=true, CIDR entries contain ipv4 and ipv6 | no-masq CIDR entries, and no-masq ipv6 link-local by default unless masqLinkLocalIPv6: true (same behavior as IPv4) |
| --enable-ipv6=false, CIDR entries contain ipv6 entries | error, crash |
| --enable-ipv6=true, no ipv6 or ip6tables on machine | error, crash |
Do you think this behavior is acceptable? @varunmar @khenidak
@bowei @varunmar I've added the flag and updated the PR. PTAL.
/lgtm
@varunmar: changing LGTM is restricted to collaborators
@bowei Can you PTAL at this PR when you get a chance?
I am buried, so I am assigning to Varun. @varunmar if you don't want it, let me know.
@thockin Thank you! @varunmar has reviewed the PR and added /lgtm, but it looks like a /lgtm is required from a collaborator? https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-511584713
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: aramase, bowei, varunmar
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Add support for ipv6.
This is to add support for the dual stack feature - https://github.com/kubernetes/kubernetes/pull/78801 https://github.com/kubernetes/kubernetes/pull/79386
cc @khenidak