kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Unable to access API server for a cluster created with Podman 3.3.x on macOS #2445

Open day0ops opened 3 years ago

day0ops commented 3 years ago

What happened: Unable to get a kind cluster going on macOS using Podman 3.3.1. The Podman server is running in Fedora 34 via Vagrant.

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/34-cloud-base"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"
  end

  config.vm.network :private_network, ip: "192.168.33.11"

end

When creating the kind cluster, it says the API server isn't available.

❯ sudo kind create cluster --name test
enabling experimental podman provider
Cgroup controller detection is not implemented for Podman. If you see cgroup-related errors, you might need to set systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
Creating cluster "test" ...
 ✓ Ensuring node image (kindest/node:v1.22.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-test"
You can now use your cluster with:

kubectl cluster-info --context kind-test

Thanks for using kind! 😊

❯ sudo kubectl cluster-info --context kind-test
Password:

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:52358 was refused - did you specify the right host or port?

For reference:

❯ podman info
host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.29-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.11.12-300.fc34.x86_64
  linkmode: dynamic
  memFree: 3180957696
  memTotal: 4116279296
  ociRuntime:
    name: crun
    package: crun-1.0-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.0
      commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc34.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 37m 58.92s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/vagrant/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.7.1-1.fc34.x86_64
      Version: |-
        fusermount3 version: 3.10.2
        fuse-overlayfs: version 1.7.1
        FUSE library version 3.10.2
        using FUSE kernel interface version 7.31
  graphRoot: /home/vagrant/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/vagrant/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1630356396
  BuiltTime: Mon Aug 30 20:46:36 2021
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.3.1

What you expected to happen: Be able to create a cluster in rootless mode and access the API server.

How to reproduce it (as minimally and precisely as possible): As above, in a Vagrant environment (Fedora 34):

sudo dnf --enablerepo=updates-testing install podman libvarlink-util libvarlink
systemctl --user enable --now podman
sudo loginctl enable-linger $USER

Environment:

Server:
  Version:     3.3.1
  API Version: 3.3.1
  Go Version:  go1.16.6
  Built:       Tue Aug 31 08:46:36 2021
  OS/Arch:     linux/amd64


- OS (e.g. from `/etc/os-release`): Mac OS X
BenTheElder commented 3 years ago

If the cluster came up to this point then the API server is running (this is checked, and that endpoint is used), so that only leaves the port forwarding. Podman is responsible for the port forwarding. kind just tells it to set up a forward like 127.0.0.1:$random_port -> container:$apiserver_port and then places the detected port into the kubeconfig. If that port is not reachable on loopback, something is wrong with Podman on your host.
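
A quick way to check where that forward landed and whether it is actually reachable (this assumes the default node container name for a cluster called "test", i.e. test-control-plane, and uses the port from the kubeconfig above):

# show the host port podman mapped to 6443 in the node container
podman port test-control-plane

# show the API server endpoint kind wrote into the kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# probe that endpoint on loopback; a TLS/certificate error is fine, "connection refused" is not
curl -k https://127.0.0.1:52358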

BenTheElder commented 3 years ago

cc @aojea broken podman networking ...

aojea commented 3 years ago

Is this podman remote, or is kind running inside the VM?

day0ops commented 3 years ago

@aojea so the setup is Vagrant running with VirtualBox. The Podman server is running in Fedora 34; the Podman client and kind are running locally.

day0ops commented 3 years ago

The only thing is that I have both IPv4 and IPv6 enabled on my local machine (macOS). I suppose that means the kind cluster will also be dual-stack by default if I don't explicitly disable a family, right?

BenTheElder commented 3 years ago

If the client and kind are local, then your kubectl will only be able to access the cluster if you do your own port forwarding from the host to the VM. kind isn't responsible for the VM setup, and since it creates local clusters, it binds the port forward (from the host running the container to the API server in the node container) to the loopback IP. This limitation applies to any other container ports forwarded when running Podman this way.

Alternatively, you can configure kind to bind the API server to a non-local address: https://kind.sigs.k8s.io/docs/user/configuration/#api-server
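
For example, a config along these lines (untested here; 192.168.33.11 is the Vagrant private-network IP from the report) tells kind to bind the API server forward to an address the Mac can reach:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # bind the API server port forward to the VM's private-network address instead of 127.0.0.1
  apiServerAddress: "192.168.33.11"
  # optionally pin the host port so it does not change between cluster creations
  apiServerPort: 6443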

BenTheElder commented 3 years ago

SSH port forwarding is one plausible option, but you will need to get the port from the kubeconfig (or similar) and do the forward yourself. I don't know if Podman intends to support forwarding to the actual host when using podman machine, but Docker Desktop does do this.
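
A rough sketch of that, assuming the Vagrant box from the report (user vagrant at 192.168.33.11) and that kind has already written the detected port into the kubeconfig:

# pull the forwarded API server port out of the current kubeconfig context
API_PORT=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' | awk -F: '{print $NF}')

# forward that port from the Mac's loopback to the VM's loopback
ssh -N -L "${API_PORT}:127.0.0.1:${API_PORT}" vagrant@192.168.33.11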

toanju commented 3 years ago

Hi,

I just ran into the same problem while checking whether I could use kind. I used the following config, which adds an additional port mapping (just changing the apiServerAddress as proposed by @BenTheElder in https://github.com/kubernetes-sigs/kind/issues/2445#issuecomment-917333218 did not work for me):

% cat kind.cfg
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 6443
    hostPort: 6443
    listenAddress: "0.0.0.0"

In addition, I had to fix the server address in ~/.kube/config:

-    server: https://:6443
+    server: https://127.0.0.1:6443
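
If you prefer not to edit the file by hand, the same change can be scripted; this assumes the BSD sed that ships with macOS (GNU sed takes -i without the empty string argument):

sed -i '' 's|server: https://:6443|server: https://127.0.0.1:6443|' ~/.kube/config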

This way I got the following result:

% kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

I didn't go any further from here; nonetheless, I hope this helps.

jstaf commented 2 years ago

I was able to get this working on macOS using the new podman machine functionality (no Vagrant):

# install and initialize podman
brew install podman
podman machine init --cpus=4 --memory=8096 --disk-size=50

# manually add helper_binaries_dir to ~/.config/containers/containers.conf
[engine]
  helper_binaries_dir = ["/Users/jstafford/homebrew/Cellar/podman/3.4.2/libexec/", "/Users/jstafford/homebrew/Cellar/podman/3.4.2/bin/"]

# start podman and set the connection to the root user
podman machine start
podman system connection default podman-machine-default-root

# install kind
brew install kind

# setup a kind cluster
export KIND_EXPERIMENTAL_PROVIDER=podman
kind create cluster --config=<(echo '---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
')

# fix the kubeconfig https://: URL (empty host)
sed -i '' 's/https:\/\/:/https:\/\/localhost:/g' ~/.kube/config
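
To sanity-check the result afterwards (assuming the default cluster name "kind", so the context is "kind-kind"):

kubectl cluster-info --context kind-kind
kubectl get nodes --context kind-kind
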
Missxiaoguo commented 2 years ago

I was able to get this working on macOS using the new podman machine functionality (no Vagrant): […]

Thanks! Works for me!

moshevayner commented 2 years ago

I was able to get this working on macOS using the new podman machine functionality (no Vagrant): […]

@jstaf You're a ROCK STAR! Thank you for sharing that. I can confirm that this worked for me as well. macOS Monterey (12.1), Podman 3.4.4, kind 0.11.1.

codematix commented 2 years ago

I was able to get this working on macOS using the new podman machine functionality (no Vagrant): […]

@jstaf You're a ROCK STAR! Thank you for sharing that. I can confirm that this worked for me as well. macOS Monterey (12.1), Podman 3.4.4, kind 0.11.1.

This worked for me too! Thank you @jstaf

Same specs: