loungerider opened 4 months ago
I have the same problem.
Restarting minikube fixed it for me. The error reproduces consistently after the first launch:
minikube delete --all
minikube start --driver=podman --container-runtime=containerd
😄 minikube v1.33.1 on Darwin 14.5 (arm64)
✨ Using the podman (experimental) driver based on user configuration
📌 Using rootless Podman driver
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.44 ...
E0619 16:56:32.427322 71545 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=4000MB) ...
📦 Preparing Kubernetes v1.30.0 on containerd 1.6.31 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube kubectl -- logs coredns-7db6d8ff4d-s9x75 -n kube-system
.:53
[INFO] plugin/reload: Running configuration SHA512 = 0acd057f3a0f4709031c7dfc71869eb076b357e33cc3f9e8c7bbf24d03af38ef7635b34367a89d45adab17a5391a1c2d058603c581e1c5f4a21732bf72371934
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:38019 - 24897 "HINFO IN 1096134471684580472.5056541865665957661. udp 57 false 512" - - 0 6.003458595s
[ERROR] plugin/errors: 2 1096134471684580472.5056541865665957661. HINFO: read udp 10.244.0.2:59873->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:42425 - 29487 "HINFO IN 1096134471684580472.5056541865665957661. udp 57 false 512" - - 0 6.004646058s
[ERROR] plugin/errors: 2 1096134471684580472.5056541865665957661. HINFO: read udp 10.244.0.2:59708->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:33288 - 11391 "HINFO IN 1096134471684580472.5056541865665957661. udp 57 false 512" - - 0 4.004131918s
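The timeouts above are CoreDNS failing to reach its upstream resolver on the podman gateway (192.168.49.1:53). One way to confirm the upstream itself is unreachable — a sketch, assuming the default "minikube" profile and the busybox nslookup available inside the node image — is to query it directly from the node:

```shell
# Query the gateway resolver that CoreDNS forwards to, from inside the node.
# In the failing rootless case this times out; after a stop/start it answers.
minikube ssh -- nslookup kubernetes.io 192.168.49.1
```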
minikube stop
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🛑 1 node stopped.
minikube start --driver=podman --container-runtime=containerd
😄 minikube v1.33.1 on Darwin 14.5 (arm64)
✨ Using the podman (experimental) driver based on existing profile
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.44 ...
E0619 16:58:38.405963 72269 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
🔄 Restarting existing podman container for "minikube" ...
📦 Preparing Kubernetes v1.30.0 on containerd 1.6.31 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube kubectl -- logs coredns-7db6d8ff4d-s9x75 -n kube-system
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 0acd057f3a0f4709031c7dfc71869eb076b357e33cc3f9e8c7bbf24d03af38ef7635b34367a89d45adab17a5391a1c2d058603c581e1c5f4a21732bf72371934
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:47118 - 59901 "HINFO IN 7265243234691078954.8742591653276844175. udp 57 false 512" NOERROR qr,rd,ra 57 0.004722185s
podman machine inspect
[
     {
          "ConfigDir": {
               "Path": "/Users/dimir/.config/containers/podman/machine/applehv"
          },
          "ConnectionInfo": {
               "PodmanSocket": {
                    "Path": "/var/folders/nr/c9zr4xxd6sxfcj2rq3z7vnb80000gn/T/podman/podman-machine-default-api.sock"
               },
               "PodmanPipe": null
          },
          "Created": "2024-06-19T16:43:23.711361+03:00",
          "LastUp": "0001-01-01T00:00:00Z",
          "Name": "podman-machine-default",
          "Resources": {
               "CPUs": 10,
               "DiskSize": 100,
               "Memory": 8192,
               "USBs": []
          },
          "SSHConfig": {
               "IdentityPath": "/Users/dimir/.local/share/containers/podman/machine/machine",
               "Port": 62360,
               "RemoteUsername": "core"
          },
          "State": "running",
          "UserModeNetworking": true,
          "Rootful": false,
          "Rosetta": true
     }
]
podman info
host:
  arch: arm64
  buildahVersion: 1.36.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 95.55
    systemPercent: 2.03
    userPercent: 2.42
  cpus: 10
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "40"
  eventLogger: journald
  freeLocks: 2046
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.8.11-300.fc40.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 5623390208
  memTotal: 8297472000
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.11.0-1.20240531102943328308.main.4.g6838c50.fc40.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.0-dev
    package: netavark-1.11.0-1.20240606174759319307.main.8.gfebe31a.fc40.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.0-dev
  ociRuntime:
    name: crun
    package: crun-1.15-1.20240607090105650503.main.32.gea54402.fc40.aarch64
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: 7cfd0aeb40e4605b6b0ee0afd9cfca80f9c5f68a
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240510.g7288448-1.fc40.aarch64
    version: |
      pasta 0^20240510.g7288448-1.fc40.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.aarch64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 18m 29.00s
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 6796795904
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.1.1
  Built: 1717459200
  BuiltTime: Tue Jun 4 03:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.3
  Os: linux
  OsArch: linux/arm64
  Version: 5.1.1
I am seeing the same thing on Fedora 39. Minikube v1.33.1.
What Happened?
Tested on macOS
Sonoma 14.4.1
Darwin Kernel Version 23.4.0 x86_64
minikube start --addons=ingress --driver=podman --container-runtime=containerd
minikube v1.33.1 on Darwin 14.4.1
Followed the directions at https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/ and it works with podman in rootful mode. With the podman driver in rootless mode, accessing the ingress through minikube tunnel times out: DNS inside the ingress-nginx-controller pod is not working, which causes nginx to return a 504 Gateway Timeout.
Also tested using a busybox pod and found the same DNS issue.
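The busybox check can be reproduced with a throwaway pod — a sketch, where the pod names and image tag are arbitrary and kubectl is assumed to point at the minikube context:

```shell
# Resolve an in-cluster name and an external name from a one-off busybox pod.
# In the failing rootless case the external lookup times out.
kubectl run dns-check --image=busybox:1.36 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
kubectl run dns-check-ext --image=busybox:1.36 --restart=Never --rm -it -- \
  nslookup kubernetes.io
```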
CoreDNS logs when running in rootless mode
CoreDNS logs when running in rootful mode - working
Attach the log file
log.txt
Operating System
macOS (Default)
Driver
Podman