kubernetes-sigs / ip-masq-agent

Manage IP masquerade on nodes
Apache License 2.0

Support for IPv6 #45

Closed aramase closed 5 years ago

aramase commented 5 years ago

Add support for IPv6.

This adds support for the dual-stack feature: https://github.com/kubernetes/kubernetes/pull/78801, https://github.com/kubernetes/kubernetes/pull/79386

cc @khenidak

k8s-ci-robot commented 5 years ago

Welcome @aramase!

It looks like this is your first PR to kubernetes-incubator/ip-masq-agent 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-incubator/ip-masq-agent has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

khenidak commented 5 years ago

/assign @thockin

This is for masquerading, as we discussed last week

vllry commented 5 years ago

/cc

k8s-ci-robot commented 5 years ago

@vllry: GitHub didn't allow me to request PR reviews from the following users: vllry.

Note that only kubernetes-incubator members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to [this](https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-507765047):

> /cc

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
aojea commented 5 years ago

/cc @aojea

k8s-ci-robot commented 5 years ago

@aojea: GitHub didn't allow me to request PR reviews from the following users: aojea.

Note that only kubernetes-incubator members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to [this](https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-507775319):

> /cc @aojea

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
aojea commented 5 years ago

Fixes: https://github.com/kubernetes-incubator/ip-masq-agent/issues/43

aojea commented 5 years ago

@aramase I think this will fail on systems that don't have IPv6 or IPv4 enabled. It's common to have both enabled, but we should consider systems with only one protocol enabled. In addition, I think the user experience will be better if the IPv4 and IPv6 CIDRs can be configured independently. I shared some thoughts in https://github.com/kubernetes-incubator/ip-masq-agent/issues/43.

aramase commented 5 years ago

@aojea Thank you for taking a look at this. The configuration takes in a CIDR list, so I thought it might be easier for users to just provide a list of all the CIDRs (IPv4/IPv6). I'm thinking maybe we can add an agent flag to enable IPv6? By default that will be set to false. Thoughts?
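For reference, the agent reads a small YAML/JSON config mounted from a ConfigMap. A mixed-family list as described above could look like the sketch below (the `nonMasqueradeCIDRs` and `masqLinkLocal` keys follow the agent's documented config format; the specific CIDR values are illustrative):

```yaml
# Illustrative ip-masq-agent config with a mixed IPv4/IPv6 CIDR list.
# The IPv6 entry (fd00::/8, a ULA range) is an example value, not a default.
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
  - fd00::/8
masqLinkLocal: false
```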

aojea commented 5 years ago

> @aojea Thank you for taking a look at this. The configuration takes in a CIDR list, so I thought it might be easier for users to just provide a list of all the CIDRs (IPv4/IPv6).

You are totally right... also, this way the configuration is backward compatible.

> I'm thinking maybe we can add an agent flag to enable IPv6? By default that will be set to false. Thoughts?

I think that's a good idea. My thoughts: users should be able to upgrade without any changes, so what's the best option, using a flag or autodetecting it from the configuration? Users should also be able to use the agent in IPv4-only, IPv6-only, or dual-stack mode.

Regarding dual stack I have one doubt: what should happen if one of the protocols fails? Let's say someone uses dual stack and the system doesn't have the IPv6 tools and can't install the IPv6 rules. Should the agent fail, or keep working with the IPv4 iptables rules?
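One way to handle a mixed list is to partition entries by address family, so IPv4 rules go to iptables and IPv6 rules go to ip6tables independently. A minimal sketch (not the agent's actual implementation; `splitCIDRsByFamily` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"net"
)

// splitCIDRsByFamily partitions a mixed nonMasqueradeCIDRs list into
// IPv4 and IPv6 groups, so each group can be programmed into the
// matching tool (iptables vs ip6tables). Sketch only.
func splitCIDRsByFamily(cidrs []string) (v4, v6 []string, err error) {
	for _, c := range cidrs {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return nil, nil, fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		if ip.To4() != nil {
			// To4 returns non-nil only for IPv4 addresses.
			v4 = append(v4, c)
		} else {
			v6 = append(v6, c)
		}
	}
	return v4, v6, nil
}

func main() {
	v4, v6, err := splitCIDRsByFamily([]string{"10.0.0.0/8", "fd00::/8"})
	fmt.Println(v4, v6, err) // [10.0.0.0/8] [fd00::/8] <nil>
}
```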

khenidak commented 5 years ago

I think the best approach is to add a flag for IPv6; if enabled, we non-masquerade link-local. For the user-provided list we don't change anything, it's just a list of CIDRs.

vllry commented 5 years ago

/cc

k8s-ci-robot commented 5 years ago

@vllry: GitHub didn't allow me to request PR reviews from the following users: vllry.

Note that only kubernetes-incubator members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to [this](https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-510244927):

> /cc

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
aramase commented 5 years ago

> I think the best approach is to add a flag for IPv6; if enabled, we non-masquerade link-local. For the user-provided list we don't change anything, it's just a list of CIDRs.

@khenidak If we add a flag, is the suggestion that we masquerade everything, and if --enable-ipv6 then we non-masquerade link-local? No changes to cluster-cidr makes sense. If the user provides an IPv6 CIDR in the list, then we add that in iptables.
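For illustration, the non-masquerade rule for IPv6 link-local could be expressed as ip6tables arguments like the sketch below (`nonMasqRuleArgs` is a hypothetical helper; IP-MASQ-AGENT is the chain name the agent uses for its rules, and fe80::/10 is the IPv6 link-local range):

```go
package main

import "fmt"

// linkLocalCIDRIPv6 is the IPv6 link-local range that would be skipped
// from masquerading when --enable-ipv6 is set. Sketch, not agent code.
const linkLocalCIDRIPv6 = "fe80::/10"

// nonMasqRuleArgs builds the ip6tables argument list for a RETURN rule
// in the agent's chain, so traffic destined to cidr is not masqueraded.
func nonMasqRuleArgs(cidr string) []string {
	return []string{"-t", "nat", "-A", "IP-MASQ-AGENT", "-d", cidr, "-j", "RETURN"}
}

func main() {
	fmt.Println(nonMasqRuleArgs(linkLocalCIDRIPv6))
}
```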

bowei commented 5 years ago

cc: @varunmar

varunmar commented 5 years ago

I'd like a way to ensure that today's behavior can be replicated exactly after this change goes in.

Ideally, today's behavior would be the default when exactly the same flags and configs are used with version n+1. I'd be okay with it if we had to explicitly specify a flag to return to the old behavior. I'd be much less happy if it also required a change in the default config (many customers run this addon with no additional config; it would be unfortunate if we forced them all to manage another configmap before they are ready).

I think @khenidak's suggestion is a good one - a flag named --enable-ipv6 that is false by default. If true, then write the link-local ip6 rule if the configmap is empty. If a list of CIDRs is present, then use them instead.

So one question is what to do if the --enable-ipv6 flag is false and a configmap with IPv6 addresses is passed in. Another question to consider is what to do if IPv6 tools are not installed on the machine (or, less commonly, no IPv4 tools).

I think the answer to both should be that an error condition should be raised and the daemon must not report healthy status. The easiest way may be to turn it into an infinite crash loop till the config is fixed, but maybe there are better solutions.

Here's what I think the behavior should be:

| Configuration | Behavior |
| --- | --- |
| `--enable-ipv6=false`, no CIDR entries | No change from today's behavior (no-masq default set of IPv4 CIDRs, do not no-masq IPv6 link-local) |
| `--enable-ipv6=false`, CIDR entries contain IPv4 only | No-masq the CIDR entries only |
| `--enable-ipv6=true`, no CIDR entries | No-masq default set of IPv4 CIDRs + no-masq link-local IPv6 |
| `--enable-ipv6=true`, CIDR entries contain IPv4 and/or IPv6 | No-masq CIDR entries only |
| `--enable-ipv6=true`, CIDR entries contain IPv4 only | No-masq CIDR entries only (do not no-masq IPv6 link-local) |
| `--enable-ipv6=false`, CIDR entries contain IPv6 entries | Error, crash |
| `--enable-ipv6=true`, no IPv6 or ip6tables on machine | Error, crash |
aramase commented 5 years ago

> I'd be okay with it if we had to explicitly specify a flag to return to the old behavior.

By introducing the --enable-ipv6 agent flag we will retain the old behavior. The default will be false, which means it'll only set up the IPv4 chain and rules.

> I'd be much less happy if it also required a change in the default config (many customers run this addon with no additional config, it would be unfortunate if we forced them all to manage another configmap before they are ready)

Totally understand. We shouldn't change the default config.

> I think @khenidak's suggestion is a good one - a flag named --enable-ipv6 that is false by default. If true, then write the link-local ip6 rule if the configmap is empty. If a list of CIDRs is present, then use them instead.

I think instead of checking only whether the configmap with CIDRs exists, maybe we should check whether IPv6 CIDRs exist in the list? If not, then we should still write the link-local ip6 rule if IPv6 is enabled.

Totally agree with the behavior you suggested, with just a few changes:

| Configuration | Behavior |
| --- | --- |
| `--enable-ipv6=false`, no CIDR entries | No change from today's behavior (no-masq default set of IPv4 CIDRs, do not no-masq IPv6 link-local) |
| `--enable-ipv6=false`, CIDR entries contain IPv4 only | No-masq the CIDR entries only |
| `--enable-ipv6=true`, no CIDR entries | No-masq default set of IPv4 CIDRs + no-masq link-local IPv6 |
| `--enable-ipv6=true`, CIDR entries contain IPv4 only | No-masq CIDR entries only + no-masq link-local IPv6 |
| `--enable-ipv6=true`, CIDR entries contain IPv4 and IPv6 | No-masq CIDR entries, and no-masq link-local for IPv6 by default unless `masqLinkLocalIPv6: true` (same behavior as IPv4) |
| `--enable-ipv6=false`, CIDR entries contain IPv6 entries | Error, crash |
| `--enable-ipv6=true`, no IPv6 or ip6tables on machine | Error, crash |
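The two error rows above can be sketched as a config validation step (`validateConfig` is a hypothetical helper, not the agent's actual code; detecting a missing ip6tables binary would be a separate runtime check not shown here):

```go
package main

import (
	"fmt"
	"net"
)

// validateConfig rejects IPv6 CIDR entries unless enableIPv6 is set,
// matching the "Error, crash" row in the behavior table. IPv4 entries
// are always accepted. Sketch only.
func validateConfig(enableIPv6 bool, cidrs []string) error {
	for _, c := range cidrs {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		// To4 returns nil for IPv6 addresses.
		if ip.To4() == nil && !enableIPv6 {
			return fmt.Errorf("IPv6 CIDR %q present but --enable-ipv6=false", c)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateConfig(false, []string{"10.0.0.0/8"}))      // <nil>
	fmt.Println(validateConfig(false, []string{"fd00::/8"}) != nil) // true
}
```

On error the agent would exit non-zero, producing the crash-loop-until-fixed behavior discussed above.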

Do you think this behavior is acceptable? @varunmar @khenidak

aramase commented 5 years ago

@bowei @varunmar I've added the flag and updated the PR. PTAL.

varunmar commented 5 years ago

/lgtm

k8s-ci-robot commented 5 years ago

@varunmar: changing LGTM is restricted to collaborators

In response to [this](https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-511584676):

> /lgtm

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
aramase commented 5 years ago

@bowei Can you PTAL at this PR when you get a chance?

thockin commented 5 years ago

I am buried, so I am assigning to Varun. @varunmar if you don't want it, let me know.

aramase commented 5 years ago

@thockin Thank you! @varunmar has reviewed the PR and added /lgtm, but it looks like a /lgtm is required from a collaborator? https://github.com/kubernetes-incubator/ip-masq-agent/pull/45#issuecomment-511584713

bowei commented 5 years ago

/approve

bowei commented 5 years ago

/lgtm

k8s-ci-robot commented 5 years ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aramase, bowei, varunmar

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/kubernetes-incubator/ip-masq-agent/blob/master/OWNERS)~~ [bowei]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.