coreos / docs

Documentation for CoreOS projects
http://coreos.com/docs
Apache License 2.0

Guide for disabling ipv6 #343

Open darethas opened 10 years ago

darethas commented 10 years ago

Hi, I asked in IRC how to stop the NIC from assigning an IPv6 address to my machine.

My first attempted solution was to use cloud-config's write_files section to write /etc/netconfig with the IPv6 lines commented out:

write_files:
  - path: /etc/netconfig
    permissions: 0644
    owner: root
    content: |
      udp        tpi_clts      v     inet     udp     -       -
      tcp        tpi_cots_ord  v     inet     tcp     -       -
      #udp6       tpi_clts      v     inet6    udp     -       -
      #tcp6       tpi_cots_ord  v     inet6    tcp     -       -
      rawip      tpi_raw       -     inet      -      -       -
      local      tpi_cots_ord  -     loopback  -      -       -
      unix       tpi_cots_ord  -     loopback  -      -       -

However, the machine was still getting an IPv6 address.

@philips answered in a gist: write a file called 10-disable-ipv6.conf to /etc/sysctl.d/ with the content net.ipv6.conf.eth0.disable_ipv6 = 1.

So the cloud-config (or user_data, if already installed on bare metal) would look something like this:

write_files:
  - path: /etc/sysctl.d/10-disable-ipv6conf
    permissions: 0644
    owner: root
    content: |
      net.ipv6.conf.eth0.disable_ipv6 = 1

This issue is filed per his request; he asked me to open it so the team remembers to document this.

dpetzel commented 10 years ago

Was there anything else that needs to be done to have IPv6 disabled? I added the info above, and the /etc/sysctl.d/10-disable-ipv6conf file is created with the proper contents, but IPv6 still seems enabled.

Thanks

marineam commented 10 years ago

To apply this to all interfaces, replace eth0 with 'all', and probably add another line with 'default' too. Then systemd-sysctl.service will need to be restarted, or the machine rebooted.
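
Something like this, for example:

# e.g. /etc/sysctl.d/10-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# re-apply without a reboot:
$ sudo systemctl restart systemd-sysctl.service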

dpetzel commented 10 years ago

Thanks, it turns out the path is wrong in the example (I think). Changing /etc/sysctl.d/10-disable-ipv6conf to /etc/sysctl.d/10-disable-ipv6.conf and then restarting systemd-sysctl.service seemed to do the trick.

marineam commented 10 years ago

Ah, d'oh. Yes, the .conf extension is significant: systemd-sysctl only reads files under /etc/sysctl.d/ whose names end in .conf.

darethas commented 10 years ago

Whoopsies, I missed a . (dot) between ipv6 and conf

dpetzel commented 10 years ago

No worries! So I'm testing this out as part of bootstrapping new nodes, and it doesn't appear that the file gets laid down early enough to have an effect. On a fresh node I see the file (with the proper extension) put in place, but IPv6 is still enabled. A manual systemctl restart systemd-sysctl.service does the trick, but I'd like the box to come up without IPv6 enabled.

I tried leveraging the runcmd example listed in the cloud-config docs, but it appears to have been ignored, with the message Warning: unrecognized key "runcmd" in provided cloud config - ignoring section. That matches the CoreOS docs, which say only a subset of the cloud-config modules is supported.

So I think I could achieve this with a oneshot unit file, but I'm wondering if there is a better way?
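
Something like this is what I have in mind (a rough sketch; the unit name is just illustrative, and it assumes write_files has already laid down the sysctl file):

coreos:
  units:
    - name: reapply-sysctl.service
      command: start
      content: |
        [Unit]
        Description=Re-apply sysctl settings written by cloud-config

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/systemctl restart systemd-sysctl.service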

robszumski commented 9 years ago

What's the status of this? Does this need to be addressed in any documentation we currently have?

jumanjiman commented 9 years ago

FWIW, here's how I disable IPv6 on all interfaces:

# i actually use user-data (cloud-config) to write this file:
core@ip-192-168-17-202 ~ $ cat /etc/sysctl.d/10-disable-ipv6.conf 
net.ipv6.conf.all.disable_ipv6 = 1

# apply the config
core@ip-192-168-17-202 ~ $ sudo systemctl restart systemd-sysctl

# check addresses
core@ip-192-168-17-202 ~ $ ip -6 addr show
core@ip-192-168-17-202 ~ $ ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    inet 192.168.17.202/24 brd 192.168.17.255 scope global dynamic ens3
       valid_lft 3151sec preferred_lft 3151sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default 
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

gswallow commented 9 years ago

I had to reboot my CoreOS VM after applying the changes through configdrive.iso. The first boot writes the file, and the second boot respects the applied changes.

Also, I disabled IPv6 (aside from the fact that it's wholly unnecessary in a 40-employee shop) because of this:

Aug 05 11:24:14 localhost systemd-resolved[495]: Detected conflict on localhost IN AAAA 2604:8800:100:81a0:250:56ff:fe3f:92a5
Aug 05 11:24:14 localhost systemd-resolved[495]: Hostname conflict, changing published hostname from 'localhost' to 'localhost3'.
Aug 05 11:24:14 localhost systemd-resolved[495]: Assertion 'e->key == i->next_key' failed at /build/amd64-usr/var/tmp/portage/sys-apps/systemd-220-r4/work/systemd-220/src/shared/ha

schneidexe commented 8 years ago

@gswallow to get it applied on first boot you can add this to your cloud-config:

coreos:
  units:
    - name: systemd-sysctl.service
      command: restart

codemedic commented 8 years ago

#cloud-config

---
write_files:
- path: "/etc/sysctl.d/10-disable-ipv6conf"
  owner: root
  content: |
    net.ipv6.conf.all.disable_ipv6=1
coreos:
  units:
  - name: systemd-sysctl.service
    command: restart

The above cloud-config does not seem to make any difference. It does create the file, but the kernel setting does not change.

Whereas when I run the sysctl command manually (as below), it does work.

sysctl net.ipv6.conf.all.disable_ipv6=1

Any idea why the cloud-config is not taking effect?
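
For reference, checking the live value confirms the setting never gets applied:

$ sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0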

mimmus commented 8 years ago

Perhaps you need a .conf in the filename under /etc/sysctl.d

simonvanderveldt commented 7 years ago

Perhaps you need a .conf in the filename under /etc/sysctl.d

Yeah, that's probably it. The combination of a .conf file containing the disables and a restart of systemd-sysctl works for us so far.

What net.ipv6.conf.all vs net.ipv6.conf.default does is a bit unclear, but judging by others' experiences it makes sense to disable both; see https://unix.stackexchange.com/questions/90443/sysctl-proc-sys-net-ipv46-conf-whats-the-difference-between-all-defau/90560#90560. There's a short illustration after the cloud-config below.

write_files:
  - path: "/etc/sysctl.d/10-disable-ipv6.conf"
    owner: root
    content: |
      net.ipv6.conf.all.disable_ipv6=1
      net.ipv6.conf.default.disable_ipv6=1
coreos:
  units:
    - name: systemd-sysctl.service
      command: restart
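
To illustrate the distinction (a sketch based on the linked answer): 'all' flips the setting on every interface that already exists, while 'default' is the template inherited by interfaces created afterwards, such as docker0 or veth pairs.

# interfaces that already exist:
$ sudo sysctl net.ipv6.conf.all.disable_ipv6=1
# interfaces created later (docker0, veth*, ...):
$ sudo sysctl net.ipv6.conf.default.disable_ipv6=1
# verify: no IPv6 addresses should be listed
$ ip -6 addr show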

marslo commented 6 years ago

I've encountered the same issue. IPv6 was disabled via:

$ tail -3 /etc/default/grub
# disable ipv6
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"

$ sudo update-grub
$ sudo reboot
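
Note that with ipv6.disable=1 on the kernel command line the kernel builds no IPv6 stack at all, so the whole /proc/sys/net/ipv6 tree disappears (output abbreviated; this also explains the sysctl errors further down):

$ cat /proc/cmdline
... ipv6.disable=1 ...
$ ls /proc/sys/net/ipv6
ls: cannot access '/proc/sys/net/ipv6': No such file or directory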

And ip addr shows:

$ ip -6 addr show

$ ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 130.147.182.57/23 brd 130.147.183.255 scope global dynamic noprefixroute enp0s31f6
       valid_lft 690681sec preferred_lft 690681sec
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.10.235/23 brd 192.168.11.255 scope global dynamic noprefixroute wlp2s0
       valid_lft 85898sec preferred_lft 85898sec
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

The pod status:

$ kubectl --namespace=kube-system get pods
NAME                                   READY     STATUS              RESTARTS   AGE
coredns-78fcdf6894-555tm               0/1       ContainerCreating   0          1h
coredns-78fcdf6894-c7lms               0/1       ContainerCreating   0          1h
etcd-imarslo18                         1/1       Running             1          1h
kube-apiserver-imarslo18               1/1       Running             1          1h
kube-controller-manager-imarslo18      1/1       Running             1          1h
kube-flannel-ds-f8j2z                  1/1       Running             1          56m
kube-proxy-sddp2                       1/1       Running             1          1h
kube-scheduler-imarslo18               1/1       Running             1          1h
kubernetes-dashboard-6948bdb78-hh8wm   0/1       ContainerCreating   0          19m

When I describe the pod:

$ kubectl --namespace=kube-system describe pods coredns-78fcdf6894-555tm
Name:           coredns-78fcdf6894-555tm
Namespace:      kube-system
Node:           imarslo18/192.168.10.235
Start Time:     Tue, 03 Jul 2018 18:49:20 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=3497892450
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Container ID:
    Image:         k8s.gcr.io/coredns:1.1.3
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-k4xfp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-k4xfp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-k4xfp
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Warning  FailedScheduling        57m (x93 over 1h)    default-scheduler   0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedCreatePodSandBox  56m                  kubelet, imarslo18  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "819e2f4395bfa32332180480b9d72a76c287154c19fd3a748f7cfb2acecc2af7" network for pod "coredns-78fcdf6894-555tm": NetworkPlugin cni failed to set up pod "coredns-78fcdf6894-555tm_kube-system" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "819e2f4395bfa32332180480b9d72a76c287154c19fd3a748f7cfb2acecc2af7" network for pod "coredns-78fcdf6894-555tm": NetworkPlugin cni failed to teardown pod "coredns-78fcdf6894-555tm_kube-system" network: failed to get IP addresses for "eth0": <nil>]

The error shows CNI trying to open /proc/sys/net/ipv6/conf/eth0/accept_dad, which does not exist on this machine.

My system details:

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

$ cat /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.forwarding=0
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl: cannot stat /proc/sys/net/ipv6/conf/all/forwarding: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/all/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/default/disable_ipv6: No such file or directory
sysctl: cannot stat /proc/sys/net/ipv6/conf/lo/disable_ipv6: No such file or directory 

$ cat /etc/sysctl.d/10-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1

lucab commented 6 years ago

@Marslo that looks like a CNI bug, please report it there.

squeed commented 6 years ago

You are using an old version of CNI, which does not work on systems with ipv6 disabled. Please re-enable ipv6 or upgrade CNI.

marslo commented 6 years ago

@squeed, the Kubernetes master was initialized just this afternoon; the init command is below. I don't think this can be an old version of CNI. I will submit the error to the CNI project.

$ sudo kubeadm init --ignore-preflight-errors=all --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=130.147.182.57 --kubernetes-version=v1.11.0
I0703 18:29:14.713462    6850 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0703 18:29:14.814406    6850 kernel_validator.go:81] Validating kernel version
I0703 18:29:14.814547    6850 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [imarslo18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 130.147.182.57]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [imarslo18 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [imarslo18 localhost] and IPs [130.147.182.57 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 60.503799 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node imarslo18 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node imarslo18 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "imarslo18" as an annotation
[bootstraptoken] using token: 5yks9y.r8tr98s2bd1fqgiz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!
....
....