kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

VM: Add support for AppArmor #8299

Open acoulon99 opened 4 years ago

acoulon99 commented 4 years ago

Steps to reproduce the issue:

  1. brew install minikube
  2. minikube config set vm-driver virtualbox
  3. minikube start --feature-gates AppArmor=true
  4. minikube ssh 'cat /sys/module/apparmor/parameters/enabled'
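
For reference, the check in step 4 can be broadened to confirm whether the guest kernel was built with AppArmor at all. A minimal sketch, using only generic sysfs paths (nothing here is minikube-specific, and securityfs layout can vary by kernel build):

```
# The parameter file the Kubernetes docs reference; it is missing when
# the kernel was built without CONFIG_SECURITY_APPARMOR entirely:
minikube ssh 'cat /sys/module/apparmor/parameters/enabled'

# securityfs only exposes an apparmor directory when the LSM is built in:
minikube ssh 'ls /sys/kernel/security/apparmor 2>/dev/null || echo "no apparmor dir"'

# List the LSMs the running kernel actually enabled:
minikube ssh 'cat /sys/kernel/security/lsm'
```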

Full output of failed command:

% minikube ssh 'cat /sys/module/apparmor/parameters/enabled'
cat: /sys/module/apparmor/parameters/enabled: No such file or directory
ssh: exit status 1

Expected output:

According to the Kubernetes AppArmor documentation, the command should instead report that AppArmor is enabled in the VM's kernel:

% minikube ssh 'cat /sys/module/apparmor/parameters/enabled'
Y
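
For context on why node-level support matters: at this Kubernetes version, a pod requests an AppArmor profile through a beta annotation, and the kubelet only admits such a pod on a node whose kernel has AppArmor enabled; otherwise the pod is blocked rather than run unconfined. A minimal sketch of such a pod, following the pattern in the Kubernetes AppArmor docs (the pod name and image are just illustrative):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-test
  annotations:
    # Request the container runtime's default AppArmor profile for the
    # container named "nginx"; on a node without AppArmor the kubelet
    # blocks the pod with reason AppArmor instead of running it.
    container.apparmor.security.beta.kubernetes.io/nginx: runtime/default
spec:
  containers:
  - name: nginx
    image: nginx
EOF
```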

Full output of minikube start command used, if not already included:

% minikube start --feature-gates AppArmor=true
😄  minikube v1.10.1 on Darwin 10.15.4
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the virtualbox driver based on user configuration
💿  Downloading VM boot image ...
    > minikube-v1.10.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.10.0.iso: 174.99 MiB / 174.99 MiB [ 100.00% 10.38 MiB p/s 17s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.2 preload ...
    > preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4: 525.43 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🔥  Deleting "minikube" in virtualbox ...
🤦  StartHost failed, but will try again: creating host: create: creating: /usr/local/bin/VBoxManage modifyvm minikube --firmware bios --bioslogofadein off --bioslogofadeout off --bioslogodisplaytime 0 --biosbootmenu disabled --ostype Linux26_64 --cpus 2 --memory 4000 --acpi on --ioapic on --rtcuseutc on --natdnshostresolver1 on --natdnsproxy1 off --cpuhotplug off --pae on --hpet on --hwvirtex on --nestedpaging on --largepages on --vtxvpid on --accelerate3d off --boot1 dvd failed:
VBoxManage: error: The machine 'minikube' is already locked for a session (or being unlocked)
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MachineWrap, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LockMachine(a->session, LockType_Write)" at line 531 of file VBoxManageModifyVM.cpp

🔥  Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

❗  /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.2.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version
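
Two asides on the transcript above. The kubectl mismatch warning can be cleared by letting minikube drive a matching client. And the first StartHost attempt failed with VBOX_E_INVALID_OBJECT_STATE because the VM session was still locked; minikube recovered by deleting and recreating the VM on its own, but if the retry had not succeeded, a manual cleanup along these lines is the usual recovery (a sketch, not an official procedure; "minikube" is the default VM name):

```
# Run a client that matches the cluster's server version:
minikube kubectl -- get pods -A

# If a VirtualBox session stays wedged, force the VM off and start over:
VBoxManage controlvm minikube poweroff 2>/dev/null || true
minikube delete
minikube start --feature-gates AppArmor=true
```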

Optional: Full output of minikube logs command:

```
% minikube logs
==> Docker <==
-- Logs begin at Thu 2020-05-28 14:44:33 UTC, end at Thu 2020-05-28 14:48:06 UTC. --
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.456800454Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.456835982Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.456902450Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.456963704Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457239323Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457290357Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457337741Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457376971Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457410832Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457443840Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457478734Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457518763Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457558644Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457631206Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457673482Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457749724Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457797800Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457833020Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.457866918Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.458029024Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.458106746Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.458145692Z" level=info msg="containerd successfully booted in 0.004826s"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.468094241Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.468121853Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.468135601Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.468147971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.469020295Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.469044072Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.469055379Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.469062966Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725072696Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725102625Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725136992Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725144591Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725155043Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725159966Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.725309622Z" level=info msg="Loading containers: start."
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.792370144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.818203430Z" level=info msg="Loading containers: done."
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.834803636Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.834924354Z" level=info msg="Daemon has completed initialization"
May 28 14:44:58 minikube systemd[1]: Started Docker Application Container Engine.
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.848906042Z" level=info msg="API listen on /var/run/docker.sock"
May 28 14:44:58 minikube dockerd[2573]: time="2020-05-28T14:44:58.851460492Z" level=info msg="API listen on [::]:2376"
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.507334018Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/751ac42fda7903bb740c017b5678d62abb6d32728621fe2eadc0cefc15135d59/shim.sock" debug=false pid=3471
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.530781075Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/827dbbf4ca612fe7aaa32d252bc9020867e808bd0cc35dd8b621d09557338a67/shim.sock" debug=false pid=3483
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.536050543Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4193838e781caa6cd037ed569fc574941a514c1d51e065b95249338aad9e14ba/shim.sock" debug=false pid=3490
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.571241257Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f387955437087f7c6fc7e5f7178461297270871052794961eb6da7151b10c547/shim.sock" debug=false pid=3516
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.773997638Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/722be1b0feefd597764aad07ca640ac83b0c43558333ce096c7c1624cffb9a63/shim.sock" debug=false pid=3639
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.776961414Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/801b467c1893c7f3dd361f8a2eac4c619b5264734c2978c6ac83d59cd8df4d4f/shim.sock" debug=false pid=3643
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.800629854Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0bce83925598a2503f37e41c72727337f1a3af3be866e7fa2e58f4edbc5a9ca7/shim.sock" debug=false pid=3663
May 28 14:45:07 minikube dockerd[2573]: time="2020-05-28T14:45:07.807678437Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d8f5410958af49994a04e17fb4a02c941fa2a0683106c8122a27201e4d748e6/shim.sock" debug=false pid=3670
May 28 14:45:22 minikube dockerd[2573]: time="2020-05-28T14:45:22.818299620Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/909ebcceb8c3e2c3bdcfc7a102d8022b679dca10f3bef736a72e1b9c2574d8c5/shim.sock" debug=false pid=4361
May 28 14:45:22 minikube dockerd[2573]: time="2020-05-28T14:45:22.942190833Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/594afb1af8aff4395419b1d6d54571289d9ee81d716bc085d0675a6bd433f99c/shim.sock" debug=false pid=4379
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.158238024Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a71ea7e9e73d78d6719a98731550319da82e56a6c0cdb4e9dcb6f0a9dcb421c4/shim.sock" debug=false pid=4454
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.232700067Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38b7a53bd2134d5c16166ff4db09ed92abea6b82e7fdb4325364ca0a0723b242/shim.sock" debug=false pid=4482
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.393366189Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/022df310a04369259b867f94bdd413f8a7d010e0bb2ecd1119ca9b4d63155d42/shim.sock" debug=false pid=4561
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.476669852Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/af5d489a5e7694f84b503440c5a12dceb03301950edc232a6fcb897684e9474e/shim.sock" debug=false pid=4600
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.762719155Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/217ca7f371ac9e3a2135f8208e78ea13cd33edd38d492ce1563a3711439b3179/shim.sock" debug=false pid=4720
May 28 14:45:23 minikube dockerd[2573]: time="2020-05-28T14:45:23.835231728Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c550bce254f39c5a0ce92923964b935d218102fa010fd0cfc2b1027b8529bc38/shim.sock" debug=false pid=4744

==> container status <==
CONTAINER       IMAGE           CREATED         STATE    NAME                      ATTEMPT  POD ID
c550bce254f39   67da37a9a360e   2 minutes ago   Running  coredns                   0        af5d489a5e769
217ca7f371ac9   67da37a9a360e   2 minutes ago   Running  coredns                   0        022df310a0436
38b7a53bd2134   4689081edb103   2 minutes ago   Running  storage-provisioner       0        594afb1af8aff
a71ea7e9e73d7   0d40868643c69   2 minutes ago   Running  kube-proxy                0        909ebcceb8c3e
1d8f5410958af   ace0a8c17ba90   2 minutes ago   Running  kube-controller-manager   0        4193838e781ca
0bce83925598a   a3099161e1375   2 minutes ago   Running  kube-scheduler            0        f387955437087
722be1b0feefd   303ce5db0e90d   2 minutes ago   Running  etcd                      0        751ac42fda790
801b467c1893c   6ed75ad404bdd   2 minutes ago   Running  kube-apiserver            0        827dbbf4ca612

==> coredns [217ca7f371ac] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [c550bce254f3] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_05_28T09_45_14_0700
                    minikube.k8s.io/version=v1.10.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 28 May 2020 14:45:11 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Thu, 28 May 2020 14:48:04 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 28 May 2020 14:45:15 +0000   Thu, 28 May 2020 14:45:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 28 May 2020 14:45:15 +0000   Thu, 28 May 2020 14:45:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 28 May 2020 14:45:15 +0000   Thu, 28 May 2020 14:45:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 28 May 2020 14:45:15 +0000   Thu, 28 May 2020 14:45:15 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.145
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             3936856Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             3936856Ki
  pods:               110
System Info:
  Machine ID:                 85649432560d463daddd54e57dce29f2
  System UUID:                94cab39c-573b-4eee-9b0b-02de2605048f
  Boot ID:                    5f8360a9-3f49-4096-b6c0-2b17ba4579e6
  Kernel Version:             4.19.107
  OS Image:                   Buildroot 2019.02.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.18.2
  Kube-Proxy Version:         v1.18.2
Non-terminated Pods:          (8 in total)
  Namespace     Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------     ----                               ------------  ----------  ---------------  -------------  ---
  kube-system   coredns-66bff467f8-bwdfl           100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m45s
  kube-system   coredns-66bff467f8-pz72p           100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m45s
  kube-system   etcd-minikube                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
  kube-system   kube-apiserver-minikube            250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m52s
  kube-system   kube-controller-manager-minikube   200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
  kube-system   kube-proxy-7jc89                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
  kube-system   kube-scheduler-minikube            100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
  kube-system   storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             140Mi (3%)  340Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From                  Message
  ----    ------                   ----   ----                  -------
  Normal  Starting                 2m53s  kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  2m53s  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m53s  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m53s  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m52s  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeReady                2m52s  kubelet, minikube     Node minikube status is now: NodeReady
  Normal  Starting                 2m44s  kube-proxy, minikube  Starting kube-proxy.

==> dmesg <==
[ +0.000097] 00:00:00.001957 main OS Product: Linux
[ +0.000034] 00:00:00.001995 main OS Release: 4.19.107
[ +0.000033] 00:00:00.002028 main OS Version: #1 SMP Mon May 11 14:51:04 PDT 2020
[ +0.000079] 00:00:00.002061 main Executable: /usr/sbin/VBoxService
             00:00:00.002062 main Process ID: 2095
             00:00:00.002062 main Package type: LINUX_64BITS_GENERIC
[ +0.000064] 00:00:00.002142 main 5.2.32 r132073 started. Verbose level = 0
[ +0.398546] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +4.629231] hpet1: lost 287 rtc interrupts
[ +5.004032] hpet1: lost 318 rtc interrupts
[ +3.870764] systemd-fstab-generator[2352]: Ignoring "noauto" for root device
[ +0.077221] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
[ +6.052907] hpet_rtc_timer_reinit: 67 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +4.175761] systemd-fstab-generator[2562]: Ignoring "noauto" for root device
[ +0.825901] hpet1: lost 318 rtc interrupts
[ +0.709948] systemd-fstab-generator[2718]: Ignoring "noauto" for root device
[ +0.467709] systemd-fstab-generator[2789]: Ignoring "noauto" for root device
[ +1.028521] systemd-fstab-generator[2986]: Ignoring "noauto" for root device
[May28 14:45] kauditd_printk_skb: 108 callbacks suppressed
[ +6.306956] hpet_rtc_timer_reinit: 33 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +1.014583] systemd-fstab-generator[4048]: Ignoring "noauto" for root device
[ +3.988364] hpet1: lost 318 rtc interrupts
[ +5.003520] hpet1: lost 319 rtc interrupts
[ +5.001795] hpet_rtc_timer_reinit: 45 callbacks suppressed
[ +0.000011] hpet1: lost 318 rtc interrupts
[ +5.000880] hpet_rtc_timer_reinit: 3 callbacks suppressed
[ +0.000010] hpet1: lost 318 rtc interrupts
[ +5.001135] hpet1: lost 318 rtc interrupts
[ +5.001161] hpet1: lost 318 rtc interrupts
[ +5.001878] hpet1: lost 318 rtc interrupts
[ +5.000240] hpet1: lost 318 rtc interrupts
[ +5.000987] hpet1: lost 319 rtc interrupts
[May28 14:46] hpet1: lost 318 rtc interrupts
[ +5.001088] hpet1: lost 318 rtc interrupts
[ +5.001489] hpet1: lost 318 rtc interrupts
[ +5.000912] hpet1: lost 319 rtc interrupts
[ +5.001249] hpet1: lost 318 rtc interrupts
[ +5.001036] hpet1: lost 318 rtc interrupts
[ +5.000709] hpet1: lost 318 rtc interrupts
[ +4.636311] NFSD: Unable to end grace period: -110
[ +0.365010] hpet1: lost 318 rtc interrupts
[ +5.001314] hpet1: lost 318 rtc interrupts
[ +5.000586] hpet1: lost 318 rtc interrupts
[ +5.001292] hpet1: lost 318 rtc interrupts
[ +5.001422] hpet1: lost 318 rtc interrupts
[May28 14:47] hpet1: lost 318 rtc interrupts
[ +5.002018] hpet1: lost 318 rtc interrupts
[ +5.001686] hpet1: lost 318 rtc interrupts
[ +5.001321] hpet1: lost 318 rtc interrupts
[ +5.001461] hpet1: lost 319 rtc interrupts
[ +5.001569] hpet1: lost 318 rtc interrupts
[ +5.000977] hpet1: lost 318 rtc interrupts
[ +5.001792] hpet1: lost 318 rtc interrupts
[ +5.002787] hpet1: lost 318 rtc interrupts
[ +5.000651] hpet1: lost 318 rtc interrupts
[ +5.001577] hpet1: lost 318 rtc interrupts
[ +5.001237] hpet1: lost 318 rtc interrupts
[May28 14:48] hpet1: lost 318 rtc interrupts

==> etcd [722be1b0feef] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-28 14:45:08.480952 I | etcdmain: etcd Version: 3.4.3
2020-05-28 14:45:08.481086 I | etcdmain: Git SHA: 3cf2f69b5
2020-05-28 14:45:08.481113 I | etcdmain: Go Version: go1.12.12
2020-05-28 14:45:08.481123 I | etcdmain: Go OS/Arch: linux/amd64
2020-05-28 14:45:08.481198 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-05-28 14:45:08.481357 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-28 14:45:08.481816 I | embed: name = minikube
2020-05-28 14:45:08.481846 I | embed: data dir = /var/lib/minikube/etcd
2020-05-28 14:45:08.481913 I | embed: member dir = /var/lib/minikube/etcd/member
2020-05-28 14:45:08.481941 I | embed: heartbeat = 100ms
2020-05-28 14:45:08.481951 I | embed: election = 1000ms
2020-05-28 14:45:08.481993 I | embed: snapshot count = 10000
2020-05-28 14:45:08.482069 I | embed: advertise client URLs = https://192.168.99.145:2379
2020-05-28 14:45:08.485845 I | etcdserver: starting member 6c11f6602955c704 in cluster aad912dd043c203
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=()
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 became follower at term 0
raft2020/05/28 14:45:08 INFO: newRaft 6c11f6602955c704 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 became follower at term 1
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=(7787276123571078916)
2020-05-28 14:45:08.775951 W | auth: simple token is not cryptographically signed
2020-05-28 14:45:08.777536 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-05-28 14:45:08.780185 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-05-28 14:45:08.780385 I | embed: listening for metrics on http://127.0.0.1:2381
2020-05-28 14:45:08.780524 I | embed: listening for peers on 192.168.99.145:2380
2020-05-28 14:45:08.780753 I | etcdserver: 6c11f6602955c704 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/05/28 14:45:08 INFO: 6c11f6602955c704 switched to configuration voters=(7787276123571078916)
2020-05-28 14:45:08.781089 I | etcdserver/membership: added member 6c11f6602955c704 [https://192.168.99.145:2380] to cluster aad912dd043c203
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 is starting a new election at term 1
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 became candidate at term 2
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 received MsgVoteResp from 6c11f6602955c704 at term 2
raft2020/05/28 14:45:09 INFO: 6c11f6602955c704 became leader at term 2
raft2020/05/28 14:45:09 INFO: raft.node: 6c11f6602955c704 elected leader 6c11f6602955c704 at term 2
2020-05-28 14:45:09.187395 I | etcdserver: setting up the initial cluster version to 3.4
2020-05-28 14:45:09.187933 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.99.145:2379]} to cluster aad912dd043c203
2020-05-28 14:45:09.188231 I | embed: ready to serve client requests
2020-05-28 14:45:09.189529 I | embed: serving client requests on 192.168.99.145:2379
2020-05-28 14:45:09.189636 I | embed: ready to serve client requests
2020-05-28 14:45:09.190404 N | etcdserver/membership: set the initial cluster version to 3.4
2020-05-28 14:45:09.190548 I | etcdserver/api: enabled capabilities for version 3.4
2020-05-28 14:45:09.190649 I | embed: serving client requests on 127.0.0.1:2379

==> kernel <==
14:48:07 up 3 min, 0 users, load average: 0.33, 0.64, 0.31
Linux minikube 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-apiserver [801b467c1893] <==
W0528 14:45:10.100263 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.109552 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.123342 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.125958 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.136943 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0528 14:45:10.151428 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0528 14:45:10.151451 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0528 14:45:10.165680 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0528 14:45:10.165702 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0528 14:45:10.170020 1 client.go:361] parsed scheme: "endpoint"
I0528 14:45:10.170064 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0528 14:45:10.190485 1 client.go:361] parsed scheme: "endpoint"
I0528 14:45:10.190638 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0528 14:45:11.555568 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0528 14:45:11.555612 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0528 14:45:11.555850 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0528 14:45:11.556402 1 secure_serving.go:178] Serving securely on [::]:8443
I0528 14:45:11.556453 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0528 14:45:11.557158 1 crd_finalizer.go:266] Starting CRDFinalizer
I0528 14:45:11.557467 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0528 14:45:11.557481 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0528 14:45:11.557492 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0528 14:45:11.557495 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0528 14:45:11.557872 1 autoregister_controller.go:141] Starting autoregister controller
I0528 14:45:11.557887 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0528 14:45:11.557944 1 controller.go:86] Starting OpenAPI controller
I0528 14:45:11.558017 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0528 14:45:11.558030 1 naming_controller.go:291] Starting NamingConditionController
I0528 14:45:11.558125 1 establishing_controller.go:76] Starting EstablishingController
I0528 14:45:11.558144 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0528 14:45:11.558151 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0528 14:45:11.558165 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0528 14:45:11.558247 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0528 14:45:11.559217 1 available_controller.go:387] Starting AvailableConditionController
I0528 14:45:11.559300 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0528 14:45:11.559342 1 controller.go:81] Starting OpenAPI AggregationController
I0528 14:45:11.573400 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0528 14:45:11.573416 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
E0528 14:45:11.606291 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.145, ResourceVersion: 0, AdditionalErrorMsg:
I0528 14:45:11.657653 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0528 14:45:11.657799 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0528 14:45:11.658484 1 cache.go:39] Caches are synced for autoregister controller
I0528 14:45:11.660541 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0528 14:45:11.673720 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0528 14:45:12.556016 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0528 14:45:12.556094 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0528 14:45:12.564034 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0528 14:45:12.569047 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0528 14:45:12.569108 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0528 14:45:12.806469 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0528 14:45:12.837906 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0528 14:45:13.028307 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.145]
I0528 14:45:13.029134 1 controller.go:606] quota admission added evaluator for: endpoints
I0528 14:45:13.033178 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0528 14:45:14.316364 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0528 14:45:14.336921 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0528 14:45:14.542488 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0528 14:45:14.815088 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0528 14:45:22.435409 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0528 14:45:22.830895 1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager [1d8f5410958a] <==
I0528 14:45:20.781579 1 gc_controller.go:89] Starting GC controller
I0528 14:45:20.781586 1 shared_informer.go:223] Waiting for caches to sync for GC
I0528 14:45:20.931034 1 controllermanager.go:533] Started "csrcleaner"
I0528 14:45:20.931246 1 cleaner.go:82] Starting CSR cleaner controller
I0528 14:45:21.181303 1 controllermanager.go:533] Started "persistentvolume-expander"
I0528 14:45:21.181411 1 expand_controller.go:319] Starting expand controller
I0528 14:45:21.181418 1 shared_informer.go:223] Waiting for caches to sync for expand
I0528 14:45:21.431193 1 controllermanager.go:533] Started "endpointslice"
I0528 14:45:21.431282 1 endpointslice_controller.go:213] Starting endpoint slice controller
I0528 14:45:21.431326 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
I0528 14:45:22.337151 1 controllermanager.go:533] Started "garbagecollector"
I0528 14:45:22.337198 1 garbagecollector.go:133] Starting garbage collector controller
I0528 14:45:22.338723 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0528 14:45:22.338998 1 graph_builder.go:282] GraphBuilder running
I0528 14:45:22.357662 1 controllermanager.go:533] Started "cronjob"
I0528 14:45:22.358159 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0528 14:45:22.358198 1 cronjob_controller.go:97] Starting CronJob Manager
W0528 14:45:22.389025 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0528 14:45:22.392378 1 shared_informer.go:230] Caches are synced for TTL
I0528 14:45:22.412767 1 shared_informer.go:230] Caches are synced for ReplicationController
I0528 14:45:22.431325 1 shared_informer.go:230] Caches are synced for daemon sets
I0528 14:45:22.431682 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0528 14:45:22.432413 1 shared_informer.go:230] Caches are synced for job
I0528 14:45:22.432421 1 shared_informer.go:230] Caches are synced for PV protection
I0528 14:45:22.432428 1 shared_informer.go:230] Caches are synced for persistent volume
I0528 14:45:22.432572 1 shared_informer.go:230] Caches are synced for endpoint
I0528 14:45:22.442417 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"728d1a8e-da4a-4aa8-a2e4-203b88332e2a", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-7jc89
E0528 14:45:22.465952 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"728d1a8e-da4a-4aa8-a2e4-203b88332e2a", ResourceVersion:"184", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726273914, loc:(*time.Location)(0x6d07200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0017076c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017076e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001707700), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00175c200), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001707720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001707740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001707780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0016e8f00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001758708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00059e230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000101d40)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001758758)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0528 14:45:22.466273 1 shared_informer.go:230] Caches are synced for taint
I0528 14:45:22.466316 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0528 14:45:22.466519 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0528 14:45:22.467294 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0528 14:45:22.466882 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0528 14:45:22.467045 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"cc188638-3cc4-4eab-bd18-fc055372ed8b", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0528 14:45:22.476025 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0528 14:45:22.481680 1 shared_informer.go:230] Caches are synced for expand
I0528 14:45:22.481880 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0528 14:45:22.482023 1 shared_informer.go:230] Caches are synced for PVC protection
I0528 14:45:22.482457 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0528 14:45:22.482499 1 shared_informer.go:230] Caches are synced for GC
I0528 14:45:22.483210 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0528 14:45:22.531623 1 shared_informer.go:230] Caches are synced for HPA
I0528 14:45:22.683229 1 shared_informer.go:230] Caches are synced for attach detach
I0528 14:45:22.732439 1 shared_informer.go:230] Caches are synced for stateful set
I0528 14:45:22.827733 1 shared_informer.go:230] Caches are synced for deployment
I0528 14:45:22.834456 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9afe7a9f-052b-4e24-9cbe-fcf79029e203", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0528 14:45:22.852469 1 shared_informer.go:230] Caches are synced for disruption
I0528 14:45:22.852482 1 disruption.go:339] Sending events to api server.
I0528 14:45:22.859754 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8d300b5b-6459-4d43-9603-c8ce8af62bdd", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-bwdfl
I0528 14:45:22.869653 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8d300b5b-6459-4d43-9603-c8ce8af62bdd", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pz72p
I0528 14:45:22.872724 1 shared_informer.go:230] Caches are synced for namespace
I0528 14:45:22.882532 1 shared_informer.go:230] Caches are synced for service account
I0528 14:45:22.933157 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
E0528 14:45:23.002420 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0528 14:45:23.040625 1 shared_informer.go:230] Caches are synced for garbage collector
I0528 14:45:23.040650 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0528 14:45:23.044298 1 shared_informer.go:230] Caches are synced for resource quota
I0528 14:45:23.058483 1 shared_informer.go:230] Caches are synced for resource quota
I0528 14:45:23.832986 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0528 14:45:23.833011 1 shared_informer.go:230] Caches are synced for garbage collector

==> kube-proxy [a71ea7e9e73d] <==
W0528 14:45:23.371432 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0528 14:45:23.384344 1 node.go:136] Successfully retrieved node IP: 192.168.99.145
I0528 14:45:23.384363 1 server_others.go:186] Using iptables Proxier.
W0528 14:45:23.384369 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0528 14:45:23.384371 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0528 14:45:23.384673 1 server.go:583] Version: v1.18.2
I0528 14:45:23.384889 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0528 14:45:23.384911 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0528 14:45:23.385131 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0528 14:45:23.387832 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0528 14:45:23.387869 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0528 14:45:23.388421 1 config.go:133] Starting endpoints config controller
I0528 14:45:23.388432 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0528 14:45:23.388454 1 config.go:315] Starting service config controller
I0528 14:45:23.388458 1 shared_informer.go:223] Waiting for caches to sync for service config
I0528 14:45:23.488615 1 shared_informer.go:230] Caches are synced for endpoints config
I0528 14:45:23.488653 1 shared_informer.go:230] Caches are synced for service config

==> kube-scheduler [0bce83925598] <==
I0528 14:45:08.099897 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:08.100074 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:08.514964 1 serving.go:313] Generated self-signed cert in-memory
W0528 14:45:11.597642 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0528 14:45:11.597735 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0528 14:45:11.597835 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0528 14:45:11.598034 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0528 14:45:11.625234 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0528 14:45:11.625280 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0528 14:45:11.626208 1 authorization.go:47] Authorization is disabled
W0528 14:45:11.626330 1 authentication.go:40] Authentication is disabled
I0528 14:45:11.626429 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0528 14:45:11.637630 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:11.638050 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:11.637983 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0528 14:45:11.637999 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0528 14:45:11.641771 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 14:45:11.642143 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 14:45:11.642274 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.642418 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 14:45:11.642063 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0528 14:45:11.642105 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 14:45:11.642826 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.643166 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 14:45:11.643530 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0528 14:45:11.643765 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0528 14:45:11.644048 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0528 14:45:11.645375 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.646157 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0528 14:45:11.647025 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0528 14:45:11.648195 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0528 14:45:11.649328 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0528 14:45:11.650438 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0528 14:45:11.652029 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0528 14:45:14.638578 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0528 14:45:14.839200 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0528 14:45:14.848095 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2020-05-28 14:44:33 UTC, end at Thu 2020-05-28 14:48:07 UTC. --
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.788144 4056 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.788495 4056 server.go:1125] Started kubelet
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.790457 4056 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.793459 4056 server.go:145] Starting to listen on 0.0.0.0:10250
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.794279 4056 server.go:393] Adding debug handlers to kubelet server.
May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.796143 4056 volume_manager.go:265] Starting Kubelet Volume Manager May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.806690 4056 desired_state_of_world_populator.go:139] Desired state populator starts to run May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.891316 4056 status_manager.go:158] Starting to sync pod status with apiserver May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.891348 4056 kubelet.go:1821] Starting kubelet main sync loop. May 28 14:45:14 minikube kubelet[4056]: E0528 14:45:14.891382 4056 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.897407 4056 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.948375 4056 kubelet_node_status.go:70] Attempting to register node minikube May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.962113 4056 kubelet_node_status.go:112] Node minikube was previously registered May 28 14:45:14 minikube kubelet[4056]: I0528 14:45:14.962236 4056 kubelet_node_status.go:73] Successfully registered node minikube May 28 14:45:14 minikube kubelet[4056]: E0528 14:45:14.993182 4056 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084438 4056 cpu_manager.go:184] [cpumanager] starting with none policy May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084456 4056 cpu_manager.go:185] [cpumanager] reconciling every 10s May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084467 4056 state_mem.go:36] [cpumanager] initializing new in-memory state store May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084596 4056 state_mem.go:88] [cpumanager] updated default cpuset: "" May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084611 4056 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]" May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.084618 4056 policy_none.go:43] [cpumanager] none policy: Start May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.101550 4056 plugin_manager.go:114] Starting Kubelet Plugin Manager May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.193573 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.195128 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.200430 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.201798 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237806 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237844 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/257ccc1ffa508018717e8c29c822c1d2-kubeconfig") pod "kube-scheduler-minikube" (UID: 
"257ccc1ffa508018717e8c29c822c1d2") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237866 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d346fc98293de96e68324a214e3ef34a-etcd-certs") pod "etcd-minikube" (UID: "d346fc98293de96e68324a214e3ef34a") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237879 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-ca-certs") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237891 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-k8s-certs") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237942 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-ca-certs") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237959 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-k8s-certs") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237972 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d346fc98293de96e68324a214e3ef34a-etcd-data") pod "etcd-minikube" (UID: "d346fc98293de96e68324a214e3ef34a") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.237987 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/a12ce7d47900c45535fca8cb6c10d153-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "a12ce7d47900c45535fca8cb6c10d153") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238001 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238014 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9fe8076cd52b0b6f9d314ae85f3e441b-kubeconfig") pod "kube-controller-manager-minikube" (UID: "9fe8076cd52b0b6f9d314ae85f3e441b") May 28 14:45:15 minikube kubelet[4056]: I0528 14:45:15.238020 4056 reconciler.go:157] Reconciler: start to sync state May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.449035 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:22 minikube kubelet[4056]: E0528 14:45:22.460449 4056 reflector.go:178] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found 
between node "minikube" and this object May 28 14:45:22 minikube kubelet[4056]: E0528 14:45:22.460641 4056 reflector.go:178] object-"kube-system"/"kube-proxy-token-sgsdx": Failed to list *v1.Secret: secrets "kube-proxy-token-sgsdx" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.504866 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576726 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/000a7c45-4185-4b96-867c-af02473d00f7-kube-proxy") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576886 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/000a7c45-4185-4b96-867c-af02473d00f7-lib-modules") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.576980 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-sxxhm" (UniqueName: "kubernetes.io/secret/8973aa1e-049c-482a-9bd2-280e3329b562-storage-provisioner-token-sxxhm") pod "storage-provisioner" (UID: "8973aa1e-049c-482a-9bd2-280e3329b562") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577067 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/000a7c45-4185-4b96-867c-af02473d00f7-xtables-lock") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577157 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-sgsdx" (UniqueName: "kubernetes.io/secret/000a7c45-4185-4b96-867c-af02473d00f7-kube-proxy-token-sgsdx") pod "kube-proxy-7jc89" (UID: "000a7c45-4185-4b96-867c-af02473d00f7") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.577243 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/8973aa1e-049c-482a-9bd2-280e3329b562-tmp") pod "storage-provisioner" (UID: "8973aa1e-049c-482a-9bd2-280e3329b562") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.875202 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.923673 4056 topology_manager.go:233] [topologymanager] Topology Admit Handler May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982401 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-fmpvq" (UniqueName: "kubernetes.io/secret/7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28-coredns-token-fmpvq") pod "coredns-66bff467f8-pz72p" (UID: "7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982435 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be3556a5-4210-488f-b4a0-17f5a66683ca-config-volume") pod "coredns-66bff467f8-bwdfl" (UID: "be3556a5-4210-488f-b4a0-17f5a66683ca") May 28 14:45:22 minikube kubelet[4056]: I0528 
14:45:22.982452 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-fmpvq" (UniqueName: "kubernetes.io/secret/be3556a5-4210-488f-b4a0-17f5a66683ca-coredns-token-fmpvq") pod "coredns-66bff467f8-bwdfl" (UID: "be3556a5-4210-488f-b4a0-17f5a66683ca") May 28 14:45:22 minikube kubelet[4056]: I0528 14:45:22.982534 4056 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28-config-volume") pod "coredns-66bff467f8-pz72p" (UID: "7f4e7d00-6cf9-4872-bef2-ecbdd74e7a28") May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.150756 4056 pod_container_deletor.go:77] Container "594afb1af8aff4395419b1d6d54571289d9ee81d716bc085d0675a6bd433f99c" not found in pod's containers May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.277735 4056 pod_container_deletor.go:77] Container "909ebcceb8c3e2c3bdcfc7a102d8022b679dca10f3bef736a72e1b9c2574d8c5" not found in pod's containers May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.686830 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pz72p through plugin: invalid network status for May 28 14:45:23 minikube kubelet[4056]: W0528 14:45:23.767772 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bwdfl through plugin: invalid network status for May 28 14:45:24 minikube kubelet[4056]: W0528 14:45:24.285683 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pz72p through plugin: invalid network status for May 28 14:45:24 minikube kubelet[4056]: W0528 14:45:24.314613 4056 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bwdfl through plugin: invalid network status for ==> storage-provisioner [38b7a53bd213] <== ```
tstromberg commented 4 years ago

I don't believe the buildroot VM we have supports AppArmor at this time.

tstromberg commented 4 years ago

Evidently, there is buildroot support for this: http://lists.busybox.net/pipermail/buildroot/2018-May/222316.html

Help wanted!
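
For anyone picking this up, a rough sketch of the userspace half, assuming the linked patch (or its descendant) is present in the buildroot tree that the minikube ISO is built from. The `BR2_PACKAGE_APPARMOR` and `BR2_PACKAGE_LIBAPPARMOR` symbols are taken from the upstream buildroot package and may be named differently in older trees:

```
# From the buildroot tree used for the minikube ISO (symbol names assumed):
echo 'BR2_PACKAGE_LIBAPPARMOR=y' >> .config   # userspace library
echo 'BR2_PACKAGE_APPARMOR=y'    >> .config   # apparmor_parser and base profiles
make olddefconfig   # let kconfig resolve dependencies of the new symbols
make                # rebuild the rootfs/ISO
```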

afbjorklund commented 4 years ago

From what I can see, it is not enabled by default in the kernel:

```
# CONFIG_SECURITY_APPARMOR is not set
```

So it is something that needs to be enabled explicitly first:

```
β”‚ Symbol: SECURITY_APPARMOR [=n]
β”‚ Type  : bool
β”‚ Prompt: AppArmor support
β”‚   Location:
β”‚ (2) -> Security options
β”‚   Defined at security/apparmor/Kconfig:2
β”‚   Depends on: SECURITY [=y] && NET [=y]
β”‚   Selects: AUDIT [=y] && SECURITY_PATH [=n] && SECURITYFS [=n] && SECURITY_NETWORK [=y]
```
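
Since the `Selects:` line means Kconfig flips `AUDIT`, `SECURITY_PATH`, `SECURITYFS` and `SECURITY_NETWORK` automatically once the option is set, the kernel-side change should boil down to one line in the ISO's kernel config. A sketch, where the defconfig location is my assumption about the minikube-iso tree layout:

```
# deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig (path assumed)
# SECURITY=y and NET=y are already satisfied per the "Depends on:" line above
CONFIG_SECURITY_APPARMOR=y
```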
afbjorklund commented 4 years ago

Ubuntu 20.04 has this kernel config (for 5.4):

```
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_HASH=y
CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# CONFIG_SECURITY_APPARMOR_DEBUG is not set
CONFIG_DEFAULT_SECURITY_APPARMOR=y
```
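
Note that it is `CONFIG_DEFAULT_SECURITY_APPARMOR=y` that makes AppArmor the LSM actually active at boot on Ubuntu; if the minikube kernel only compiled it in without making it the default, the VM would also need `apparmor=1 security=apparmor` on its kernel command line. A sketch of how a rebuilt ISO could be verified from the host:

```
# After booting a rebuilt ISO, confirm securityfs exposes AppArmor in the VM.
# An empty profile list (rather than "No such file or directory") still means
# the LSM is active, just with no profiles loaded yet.
minikube ssh 'sudo cat /sys/kernel/security/apparmor/profiles'
```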
fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

mariafarid2 commented 2 years ago

Hello @tstromberg, is AppArmor supported in minikube now? I am facing the same issue (I am using minikube v1.26.1 on a Mac machine). I tried to access this link, http://lists.busybox.net/pipermail/buildroot/2018-May/222316.html, but it is not responding.

Chinwendu20 commented 2 years ago

Any update on this?

mbana commented 1 month ago

Hi folks, could we get an update on this?