Closed kkimdev closed 2 years ago
Thanks for opening this issue! But I think that this would rather require a completely new project, since we heavily rely on the docker API/SDK..
Since podman 2.0 supports a Docker-compatible REST API, perhaps this is worth revisiting?
@minioin , without having a look at the podman stuff: could we just continue using the Docker SDK with the Podman endpoint?
That is the intended outcome (I'm not associated with podman). However, there could be inconsistencies on both sides, the SDK and the podman API, and they won't be found until we start using them. I could lend a hand if you need one.
Copy/pasted here from @ https://github.com/inercia/k3x/issues/16#issuecomment-674257140 (and also talked about a little in https://github.com/inercia/k3x/issues/15):
Podman provides a Docker-like API in Podman 2.0. https://podman.io/blogs/2020/07/01/rest-versioning.html
API docs have the docker-compatible API under "compat" @ https://docs.podman.io/en/latest/_static/api.html (podman also has its own API to do additional things like handle pods)
I saw in a comment elsewhere on GitHub that getting a podman service up and running is as simple as running:
podman system service --time=0 &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
That's for running podman without requiring root (in a user session), as it references $XDG_RUNTIME_DIR.
For system containers, it's:
sudo podman system service --time=0 &
export DOCKER_HOST=unix:///run/podman/podman.sock
To start up the service and specify a special URI, such as the Docker URI, for compatibility:
sudo podman system service --time=0 unix:///var/run/docker.sock
I found out some of this in the docs for podman system service. It's the same as running man podman-system-service (with podman installed). There's help at the command line too: podman system service --help
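To sanity-check that the compat endpoint is actually answering, one can query it directly over the socket. A minimal sketch, assuming curl is available and a rootless user session:

```shell
# Start the Docker-compatible API service in the background (rootless session).
podman system service --time=0 &
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

# Hit the compat endpoint directly over the unix socket; the hostname after
# --unix-socket is a placeholder and is ignored by curl.
curl --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" http://d/v1.40/version
```

If the compat API is up, the last command returns a JSON version document, much like `docker version` against a real Docker daemon.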
I tried to run k3d against sudo podman system service --time=0 unix:///var/run/docker.sock. The following output was observed:
ERRO[0000] Failed to list docker networks
ERRO[0000] Failed to create cluster network
ERRO[0000] Error response from daemon: filters for listing networks is not implemented
ERRO[0000] Failed to create cluster >>> Rolling Back
INFO[0000] Deleting cluster 'k3s-default'
ERRO[0000] Failed to delete container ''
WARN[0000] Failed to delete node '': Try to delete it manually
INFO[0000] Deleting cluster network 'k3d-k3s-default'
WARN[0000] Failed to delete cluster network 'k3d-k3s-default': 'Error: No such network: k3d-k3s-default'
ERRO[0000] Failed to delete 1 nodes: Try to delete them manually
FATA[0000] Cluster creation FAILED, also FAILED to rollback changes!
I guess there will be some little things missing in the API (like the filter for network lists), but I think we'll get there eventually :+1:
Hi - is podman support now available for k3d?
I'd imagine not, since 4.0.0 only recently came out and this is in the 4.1.0 milestone.
Hi @masterthefly , no, there's no progress on this so far. I'll happily accept any PR though, as we have some higher priorities at the moment :thinking: Thanks for chiming in @06kellyjac :+1:
Would love to contribute, how do I get started? Thanks, Vishy
I found a compatibility bug in Podman, made a minor change in k3d, and configured dnsname to get quite far:
Oh yes, it's working!
Will submit PRs (after some cleanup) for:
ContainerExecStart
I even got tilt running
@iwilltry42 @kkimdev @serverwentdown I have a question regarding this feature. I'm not sure if this is related/accurate, but given the move by docker for docker desktop to a paid subscription model for business, does this place a stronger emphasis on providing podman support for k3d?
https://www.docker.com/blog/updating-product-subscriptions/
I may be way off the mark, just wanted to check with the experts.
Thanks, Damian.
@damianoneill there's an open PR providing basic Podman support. I'm just setting up a development environment for myself to test with Podman. I'd also love to have plain containerd support, so it works in environments where people use e.g. nerdctl instead of docker (CLI). But many of those things come with dependency issues that need a second thought.
But yep, it's on the roadmap 👍
Thanks @iwilltry42 appreciate the update.
Damian.
I just tried to use this feature, but it fails. Installed k3d without root.
$ k3d registry create --default-network podman mycluster-registry
INFO[0000] Creating node 'k3d-mycluster-registry'
WARN[0000] Failed to get runtime information: docker failed to provide info output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
WARN[0000] Failed to get network information: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
FATA[0000] Failed to create registry: failed to create registry node 'k3d-mycluster-registry': runtime failed to create node 'k3d-mycluster-registry': failed to create container for node 'k3d-mycluster-registry': docker failed to create container 'k3d-mycluster-registry': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
$ k3d --version
k3d version v5.2.1
k3s version v1.21.7-k3s1 (default)
$ podman --version
podman version 3.3.1
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
@csc-felipe , please have a look into the linked PR :+1:
In my comment in that thread, I also mention how to run the compatible socket: podman system service --time=0 unix:///var/run/docker.sock
Thanks for writing that here. I had looked at your link for docs from the linked PR, but missed the discussion. Might be a good idea to add this detail about the socket to the docs.
By the way, Fedora and Red Hat come with a podman-docker package which has systemd unit files for the socket. Worked fine here, also in the rootless option. Details are here: https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/
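For reference, enabling that socket via the packaged systemd units looks roughly like this (a sketch; paths assume a standard systemd user session):

```shell
# Rootless: enable Podman's API socket via the systemd user unit
# (the podman-docker package additionally provides a docker CLI shim).
systemctl --user enable --now podman.socket
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

# Rootful equivalent (socket appears at /run/podman/podman.sock):
#   sudo systemctl enable --now podman.socket
```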
@iwilltry42 , is your approach working in a machine without docker installed, only podman ?
@jonathanvila yep, I actually first tried it in a Podman container and then in a virtual machine with only Podman installed 👍
Edit: still it needs the mentioned change in Podman itself to be released first to work properly.
@iwilltry42 I'm trying in my Fedora 35, with only podman installed, and I have problems.
I run k3d with :
podman run -itd --restart=unless-stopped --net=host cnrancher/autok3s:v0.4.5
Then I run the system service :
sudo podman system service --time=0 unix:///var/run/docker.sock
Then I open the UI console : localhost:8080/ui/cluster-explorer
Then I create a cluster , provider = k3d, 1 master 1 worker , and I get this error
time="2021-12-22T16:27:44Z" level=error msg="[k3d] cluster ddd run failed: Failed Cluster Preparation: Failed Network Preparation: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
I assume I'm doing something wrong.
@jonathanvila You'll need to do the following (as root): install the podman-docker package, or run systemctl start podman.socket && ln -s /run/podman/podman.sock /var/run/docker.sock
@serverwentdown well, what I want to achieve is to use a container image directly, without installing anything.... because I want to finally use this container image in my Testcontainers test to spin up an ephemeral cluster at the beginning of my tests.
I'd imagine autok3s with the k3d provider uses docker internally, also if you refer to the docs it requires passing through the docker socket. In this case you need to pass through the "docker socket" provided by podman.socket
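Concretely, passing the Podman-provided "docker socket" through to the autok3s container could look something like this (a sketch; assumes the rootful podman.socket is active):

```shell
# Expose the Podman API socket where docker clients expect it,
# then mount it into the autok3s container.
sudo systemctl start podman.socket
sudo ln -sf /run/podman/podman.sock /var/run/docker.sock
sudo podman run -itd --restart=unless-stopped --net=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  cnrancher/autok3s:v0.4.5
```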
Plain VM, just podman and some core tooling worked just fine as far as I can see. (Haven't got proper network forwarding on the vm to check via browser)
[root@nixos:/tmp]# readlink -f $(which docker)
/nix/store/5g77v1hv1l6dvj9d0wcxzwrn0cnbv1ap-podman-wrapper-3.4.2/bin/podman
[root@nixos:/tmp]# ls -l /var/run/docker.sock
lrwxrwxrwx root root 23 B Wed Dec 22 17:05:59 2021 /var/run/docker.sock ⇒ /run/podman/podman.sock
[root@nixos:/tmp]# docker run -itd --restart=unless-stopped --net=host -v /var/run/docker.sock:/var/run/docker.sock cnrancher/autok3s:v0.4.5
Resolving "cnrancher/autok3s" using unqualified-search registries (/home/user/.config/containers/registries.conf)
Trying to pull docker.io/cnrancher/autok3s:v0.4.5...
Getting image source signatures
Copying blob 5502cfdc125e done
Copying blob 5ff580fe0cf2 done
Copying blob ce6dd002b7a5 done
Copying blob 5764688edbce done
Copying blob c8e24b38921b done
Copying blob b9438c6fc286 done
Copying config 300ab5492f done
Writing manifest to image destination
Storing signatures
3a67e1be91e6c44616f8d50074bfb869e536a99ed1656cd0aa0e4058c12c5b88
[root@nixos:/tmp]# curl localhost:8080/
<a href="/ui/">Found</a>.
[root@nixos:/tmp]# docker ps -a --no-trunc
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a67e1be91e6c44616f8d50074bfb869e536a99ed1656cd0aa0e4058c12c5b88 docker.io/cnrancher/autok3s:v0.4.5 serve --bind-address=0.0.0.0 2 minutes ago Up 2 minutes ago zealous_booth
[root@nixos:/tmp]# docker logs zealous_booth
INFO[0000] run as daemon, listening on 0.0.0.0:8080
Full nix config, if you fancy trying it: download vm.nix, run nixos-shell vm.nix, and log in as root.
vm.nix:
{ ... }: {
virtualisation.memorySize = 1024;
virtualisation.podman.enable = true;
virtualisation.podman.dockerCompat = true;
virtualisation.podman.dockerSocket.enable = true;
}
Failing on Mac on this, after adding the private key and exporting DOCKER_HOST:
export DOCKER_HOST="$(grep -ioE 'ssh://root@localhost:[0-9]+' ~/.config/containers/containers.conf)"
I got stuck at this point:
λ k3d cluster create test23
ERRO[0003] Failed to get nodes for cluster 'test23': docker failed to get containers with labels 'map[k3d.cluster:test23]': failed to list containers: error during connect: Get "http://docker.example.com/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22app%3Dk3d%22%3Atrue%2C%22k3d.cluster%3Dtest23%22%3Atrue%7D%7D&limit=0": command [ssh -l root -p 59770 -- localhost docker system dial-stdio] has exited with exit status 125, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Error: unrecognized command `docker system dial-stdio`
(the command podman system dial-stdio is missing). Probably related to https://github.com/containers/podman/issues/11397 and to the fact that podman system service --time=0 unix:///var/run/docker.sock ends up with Error: unknown flag: --time (?)
@jkremser Yep, you're right, containers/podman#11397 has details on why this doesn't work. It is because Podman doesn't fully implement Docker's SSH protocol yet. However, from what I can tell the Podman 4 release will include the fix for it (See containers/podman#11819)
You can work around it by doing SSH tunneling or (with caution) exposing podman over TCP.
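The SSH-tunnel workaround could be sketched as follows (the port and remote socket path are assumptions taken from the error output above; OpenSSH 6.7+ supports unix-socket forwarding with -L):

```shell
# Forward the remote Podman socket to a local unix socket over SSH,
# bypassing Podman's incomplete "docker system dial-stdio" support.
rm -f /tmp/podman-remote.sock
ssh -nNT -p 59770 \
  -L /tmp/podman-remote.sock:/run/podman/podman.sock \
  root@localhost &

# Point docker clients (and k3d) at the forwarded socket.
export DOCKER_HOST=unix:///tmp/podman-remote.sock
```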
Switched to colima, which works mostly out of the box. The only thing I am missing there is UDP port forwarding from containers to host; if podman had this, it would be a motivation to switch back (slightly off-topic).
@serverwentdown Have you got this working in podman rootless mode? I've installed podman v4 and k3d, and it works fine running in root podman. I get the following when I run rootless:
ERRO[0003] Failed Cluster Creation: failed setup of server/agent node k3d-k3s-default-server-0: failed to create node: runtime failed to create node 'k3d-k3s-default-server-0': failed to create container for node 'k3d-k3s-default-server-0': docker failed to create container 'k3d-k3s-default-server-0': Error response from daemon: container create: invalid config provided: Networks and static ip/mac address can only be used with Bridge mode networking
$ podman network ls
NETWORK ID NAME DRIVER
89a5dde53e7c k3d-k3s-default bridge
2f259bab93aa podman bridge
Complains about bridge mode networking in rootless mode, but the k3d network is bridge mode.
I'm aware that running K3d in Docker, actually running on a Podman socket reimplementing Docker API, while rootless (on ZFS, but that's another story...) is asking for a lot of stars to align, so I'm perfectly happy conceding root for this to work. No need to sweat about it if it's too deep to debug — probably single-digit count of people with this use case :smile:.
@geraldwuhoo Because Kubernetes requires at least a bridge network for clustering, k3s doesn't work without a bridge network. Bridge networks are not possible on rootless podman because it uses a userspace networking stack (SLIRP)
Hey there,
I tried running k3d today using podman 3.4.4 (rootless) on Arch and I got pretty far. Like others, running sudo podman system service --time=0 unix:///var/run/docker.sock in the background allowed me to get past the assumption that the socket is at /var/run/docker.sock, even though I have DOCKER_HOST set to unix://$XDG_RUNTIME_DIR/podman/podman.sock.
❯ k3d cluster create -c k3d.yaml
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha4#simple)
INFO[0000] Prep: Network
INFO[0000] Created network 'bridge'
INFO[0000] Created image volume k3d-MB-images
INFO[0000] Creating node 'MB'
INFO[0000] Successfully created registry 'MB'
INFO[0000] Container 'MB' is already connected to 'bridge'
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-MB-tools'
INFO[0001] Creating node 'k3d-MB-server-0'
INFO[0001] Creating node 'k3d-MB-agent-0'
INFO[0001] Creating node 'k3d-MB-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-MB-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 10.88.2.1 address
INFO[0002] Starting cluster 'MB'
INFO[0002] Starting servers...
INFO[0003] Starting Node 'k3d-MB-server-0'
WARN[0003] warning: encountered fatal log from node k3d-MB-server-0 (retrying 0/10): time="2022-02-22T11:51:07Z" level=fatal msg="failed to find cpu cgroup (v2)"
ERRO[0004] Failed Cluster Start: Failed to start server k3d-MB-server-0: Node k3d-MB-server-0 failed to get ready: Failed waiting for log message 'k3s is up and running' from node 'k3d-MB-server-0': node 'k3d-MB-server-0' (container 'e2c5752e4633c6213440a4e667eb77ea2fc0d18b65ac0ba77c5aa667ec9eeff6') not running
ERRO[0004] Failed to create cluster >>> Rolling Back
INFO[0004] Deleting cluster 'MB'
INFO[0005] Deleting cluster network 'bridge'
INFO[0005] Deleting 2 attached volumes...
WARN[0005] Failed to delete volume 'k3d-MB-images' of cluster 'failed to find volume 'k3d-MB-images': Error: No such volume: k3d-MB-images': MB -> Try to delete it manually
FATA[0005] Cluster creation FAILED, all changes have been rolled back!
I know others are using podman v4 from source, but I just wanted to demonstrate how far I was able to get with v3, for those interested. I haven't been able to stand up the cluster, so I'm unsure whether the network configuration is being impacted since I'm running podman in rootless mode.
@geraldwuhoo Because Kubernetes requires at least a bridge network for clustering, k3s doesn't work without a bridge network. Bridge networks are not possible on rootless podman because it uses a userspace networking stack (SLIRP)
@serverwentdown While that does make sense, both kind and minikube have been able to get rootless podman working (e.g. https://kind.sigs.k8s.io/docs/user/rootless/), so I was wondering whether, in its current state, k3d would be capable of it as well.
@johnhamelink It looks like you were able to get the bridge networking on rootless, do you mind posting your configuration? Do you mind creating the cluster again in verbose mode?
@geraldwuhoo My bad, I made some incorrect assumptions. Podman rootless does support creating networks
Opened #986
With the PRs above, it works, but I just realised k3d mounts /var/run/docker.sock into the tools container, which fails when the socket does not exist. Also, the output kubeconfig is broken (it incorrectly parses DOCKER_HOST into https://unix:PORT).
I noticed this as well, and running in verbose mode, it appears that k3d reads an additional env var, DOCKER_SOCK. I've never seen it mentioned anywhere (it wasn't set on my system, so it defaulted to /var/run/docker.sock). Setting it equal to DOCKER_HOST (minus the unix:// prefix) "resolved" this. Not sure if this is intentional behavior or not, but it does seem strange that k3d doesn't extrapolate it from DOCKER_HOST.
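The workaround above amounts to a one-line shell derivation (DOCKER_SOCK is the variable k3d appeared to read in verbose mode; stripping the scheme prefix is my assumption about what it expects):

```shell
# Derive DOCKER_SOCK from DOCKER_HOST by stripping the unix:// scheme.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
export DOCKER_SOCK="${DOCKER_HOST#unix://}"
echo "$DOCKER_SOCK"   # e.g. /run/user/1000/podman/podman.sock
```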
It even works okay if /var/run/docker.sock is just an empty file (the image imports will fail, but the cluster will still start and work)
@johnhamelink It looks like you were able to get the bridge networking on rootless, do you mind posting your configuration? Do you mind creating the cluster again in verbose mode?
Certainly! See below from my notes - hope this is helpful!
yay -Rs docker docker-compose
yay -S podman podman-docker
Follow the guide for setting up rootless podman in The Arch Wiki
Set unqualified-search-registries = ["docker.io"] in /etc/containers/registries.conf
Add export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock" to ~/.zshenv and source it
Run podman pull alpine to test everything so far
podman network create foo
podman run --rm -it --network=foo docker.io/library/alpine:latest ip addr
This should return valid IPs like so:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 6a:b6:d2:f5:61:00 brd ff:ff:ff:ff:ff:ff
inet 10.88.2.2/24 brd 10.88.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::68b6:d2ff:fef5:6100/64 scope link
valid_lft forever preferred_lft forever
Run systemctl --user start podman, then use the following config:
---
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: MB
servers: 1
agents: 2
registries:
  create:
    name: MB
    hostPort: "5000"
  config: |
    mirrors:
      "k3d-registry":
        endpoint:
          - "http://k3d-registry.localhost:5000"
Run k3d cluster create --verbose -c k3d.yaml, which should produce the following:
❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs}
DEBU[0001] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
hostaliases: []
DEBU[0001] Validating file /tmp/k3d-config-tmp-k3d.yaml2874904885 against default JSONSchema...
DEBU[0001] JSON Schema Validation Result: &{errors:[] score:46}
INFO[0001] Using config file k3d.yaml (k3d.io/v1alpha3#simple)
DEBU[0001] Configuration:
agents: 2
apiversion: k3d.io/v1alpha3
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
name: MB
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
hostpidmode: false
serversmemory: ""
registries:
config: |
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
create:
hostport: "5000"
name: MB
use: []
servers: 1
subnet: ""
token: ""
WARN[0001] Default config apiVersion is 'k3d.io/v1alpha4', but you're using 'k3d.io/v1alpha3': consider migrating.
DEBU[0001] Migrating v1alpha3 to v1alpha4
DEBU[0001] Migrated config: {TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
DEBU[0001] JSON Schema Validation Result: &{errors:[] score:100}
DEBU[0001] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0001] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:39337} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0001] generated loadbalancer config:
ports:
6443.tcp:
- k3d-MB-server-0
settings:
workerConnections: 1024
DEBU[0001] Found multiline registries config embedded in SimpleConfig:
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
DEBU[0001] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:k3d-MB ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0003d64e0 0xc0003d69c0 0xc0003d6b60 0xc0003d6d00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000296c40 ServerLoadBalancer:0xc0001e0df0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc000459e10 Use:[] Config:0xc00048c8d0}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0001] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0001] Prep: Network
INFO[0001] Created network 'k3d-MB'
INFO[0001] Created image volume k3d-MB-images
INFO[0001] Creating node 'MB'
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false)
WARN[0001] Failed to get network information: Error: No such network: bridge
ERRO[0001] Failed Cluster Preparation: Failed to create registry: failed to create registry node 'MB': runtime failed to create node 'MB': failed to create container for node 'MB': docker failed to create container 'MB': Error response from daemon: container create: unable to find network configuration for bridge: network not found
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'MB'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Running podman network create bridge
then allows us to progress further:
❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
hostaliases: []
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml3044338603 against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:46}
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 2
apiversion: k3d.io/v1alpha3
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
name: MB
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
hostpidmode: false
serversmemory: ""
registries:
config: |
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
create:
hostport: "5000"
name: MB
use: []
servers: 1
subnet: ""
token: ""
WARN[0000] Default config apiVersion is 'k3d.io/v1alpha4', but you're using 'k3d.io/v1alpha3': consider migrating.
DEBU[0000] Migrating v1alpha3 to v1alpha4
DEBU[0000] Migrated config: {TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:100}
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:41963} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-MB-server-0
settings:
workerConnections: 1024
DEBU[0000] Found multiline registries config embedded in SimpleConfig:
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:k3d-MB ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0005824e0 0xc0005829c0 0xc000582b60 0xc000582d00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00071c7c0 ServerLoadBalancer:0xc0002e74e0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc000281e10 Use:[] Config:0xc00028e690}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-MB ID:8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412 Created:2022-02-24 14:10:47.224368561 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:10.88.2.0/24 IPRange: Gateway:10.88.2.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-MB' (8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412)
INFO[0000] Created image volume k3d-MB-images
INFO[0000] Creating node 'MB'
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false)
DEBU[0001] Created container MB (ID: 875cbc9340e268ffb682867eb97bbb874316b048e7202fc83123292b5de12249)
INFO[0001] Successfully created registry 'MB'
DEBU[0001] no netlabel present on container /MB
DEBU[0001] failed to get IP for container /MB as we couldn't find the cluster network
DEBU[0001] no netlabel present on container /MB
DEBU[0001] failed to get IP for container /MB as we couldn't find the cluster network
DEBU[0001] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock)
INFO[0001] Starting new tools node...
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] Created container k3d-MB-tools (ID: 26b261cc963636e5e8d3563ea37f844b73c19a91f4c88f5d540e4d9c91b1aadd)
DEBU[0001] Node k3d-MB-tools Start Time: 2022-02-24 14:11:16.048582396 +0000 GMT m=+1.247392738
INFO[0001] Starting Node 'k3d-MB-tools'
DEBU[0001] Truncated 2022-02-24 14:11:16.244614705 +0000 UTC to 2022-02-24 14:11:16 +0000 UTC
INFO[0002] Creating node 'k3d-MB-server-0'
DEBU[0002] Created container k3d-MB-server-0 (ID: 956e13dac76be6fe6c77f1d880c897a5ba79c3944f09799c6b6059c6d0bbcc99)
DEBU[0002] Created node 'k3d-MB-server-0'
INFO[0002] Creating node 'k3d-MB-agent-0'
DEBU[0002] Created container k3d-MB-agent-0 (ID: 2cec6c0ca8bdb1683fe693b1b564fa2db74c7513adedecf9d6d71681090bb611)
DEBU[0002] Created node 'k3d-MB-agent-0'
INFO[0002] Creating node 'k3d-MB-agent-1'
DEBU[0002] Created container k3d-MB-agent-1 (ID: 9d7e5c3c1ce2f94e61c9f3c9b1a335f20b4b78b01e3115ee1fdd32e7d78d9af3)
DEBU[0002] Created node 'k3d-MB-agent-1'
INFO[0002] Creating LoadBalancer 'k3d-MB-serverlb'
DEBU[0002] Created container k3d-MB-serverlb (ID: 050333ef064bffd2aa51b42830dad49925572362f54400e1a3c06562e8b1f2e1)
DEBU[0002] Created loadbalancer 'k3d-MB-serverlb'
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock
INFO[0002] Using the k3d-tools node to gather environment information
DEBU[0002] no netlabel present on container /k3d-MB-tools
DEBU[0002] failed to get IP for container /k3d-MB-tools as we couldn't find the cluster network
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock
INFO[0003] HostIP: using network gateway 10.88.2.1 address
INFO[0003] Starting cluster 'MB'
INFO[0003] Starting servers...
DEBU[0003] >>> enabling cgroupsv2 magic
DEBU[0003] Node k3d-MB-server-0 Start Time: 2022-02-24 14:11:17.856348623 +0000 GMT m=+3.055158994
DEBU[0003] Deleting node k3d-MB-tools ...
INFO[0003] Starting Node 'k3d-MB-server-0'
DEBU[0004] Truncated 2022-02-24 14:11:18.856413342 +0000 UTC to 2022-02-24 14:11:18 +0000 UTC
DEBU[0004] Waiting for node k3d-MB-server-0 to get ready (Log: 'k3s is up and running')
WARN[0018] warning: encountered fatal log from node k3d-MB-server-0 (retrying 0/10): Mtime="2022-02-24T14:11:32Z" level=fatal msg="failed to find cpu cgroup (v2)"
ERRO[0018] Failed Cluster Start: Failed to start server k3d-MB-server-0: Node k3d-MB-server-0 failed to get ready: Failed waiting for log message 'k3s is up and running' from node 'k3d-MB-server-0': node 'k3d-MB-server-0' (container '956e13dac76be6fe6c77f1d880c897a5ba79c3944f09799c6b6059c6d0bbcc99') not running
ERRO[0018] Failed to create cluster >>> Rolling Back
INFO[0018] Deleting cluster 'MB'
DEBU[0018] no netlabel present on container /MB
DEBU[0018] failed to get IP for container /MB as we couldn't find the cluster network
DEBU[0018] Cluster Details: &{Name:MB Network:{Name:k3d-MB ID:8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412 External:true IPAM:{IPPrefix:10.88.2.0/24 IPsUsed:[10.88.2.1] Managed:false} Members:[]} Token:ABmfBwcdGuaRlXkoYaTv Nodes:[0xc0005824e0 0xc0005829c0 0xc000582b60 0xc000582d00 0xc000583860] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00071c7c0 ServerLoadBalancer:0xc0002e74e0 ImageVolume:k3d-MB-images Volumes:[k3d-MB-images k3d-MB-images]}
DEBU[0018] Deleting node k3d-MB-serverlb ...
DEBU[0018] Deleting node k3d-MB-server-0 ...
DEBU[0019] Deleting node k3d-MB-agent-0 ...
DEBU[0019] Deleting node k3d-MB-agent-1 ...
DEBU[0019] Deleting node MB ...
DEBU[0019] Skip deletion of cluster network 'k3d-MB' because it's managed externally
INFO[0019] Deleting 2 attached volumes...
DEBU[0019] Deleting volume k3d-MB-images...
DEBU[0019] Deleting volume k3d-MB-images...
WARN[0019] Failed to delete volume 'k3d-MB-images' of cluster 'failed to find volume 'k3d-MB-images': Error: No such volume: k3d-MB-images': MB -> Try to delete it manually
FATA[0019] Cluster creation FAILED, all changes have been rolled back!
@johnhamelink Wow, thank you for the detailed write-up! Unfortunately, I have already done all of these steps, and in fact I can even assign a static IP to rootless containers directly:
λ › podman network create foo
foo
λ › podman network inspect foo
[
{
"name": "foo",
"id": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
"driver": "bridge",
"network_interface": "cni-podman1",
"created": "2022-02-24T11:44:08.064526708-08:00",
"subnets": [
{
"subnet": "10.89.0.0/24",
"gateway": "10.89.0.1"
}
],
"ipv6_enabled": false,
"internal": false,
"dns_enabled": true,
"ipam_options": {
"driver": "host-local"
}
}
]
λ › podman run --rm -it --network=foo --ip=10.89.0.5 docker.io/library/alpine:latest ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether ba:40:14:29:be:63 brd ff:ff:ff:ff:ff:ff
inet 10.89.0.5/24 brd 10.89.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b840:14ff:fe29:be63/64 scope link tentative
valid_lft forever preferred_lft forever
However, cluster creation still fails immediately at the start:
λ › k3d cluster create --config ~/.kube/k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:4.0.0-dev OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:zfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
hostaliases: []
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml2080189530 against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:73}
INFO[0000] Using config file /home/jerry/.kube/k3d.yaml (k3d.io/v1alpha4#simple)
[truncated]
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000405a00 0xc000405ba0] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000119880 ServerLoadBalancer:0xc00029aa20 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:0xc0002a80c0}} KubeconfigOpts:{UpdateDefaultKubeconfig:false SwitchCurrentContext:true}}
===== ===== =====
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-k3s-default ID:89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776 Created:0001-01-01 00:00:00 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[driver:host-local] Config:[{Subnet:10.89.0.0/24 IPRange: Gateway:10.89.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-k3s-default' (89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776)
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false)
ERRO[0000] Failed to run tools container for cluster 'k3s-default'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-k3s-default-server-0: failed to create node: runtime failed to create node 'k3d-k3s-default-server-0': failed to create container for node 'k3d-k3s-default-server-0': docker failed to create container 'k3d-k3s-default-server-0': Error response from daemon: container create: invalid config provided: Networks and static ip/mac address can only be used with Bridge mode networking
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3s-default'
ERRO[0001] failed to get cluster: No nodes found for given cluster
Even though the network k3d created is in bridge mode, and I can create a static-IP container on it manually:
λ › podman network inspect k3d-k3s-default
[
{
"name": "k3d-k3s-default",
"id": "89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776",
"driver": "bridge",
"network_interface": "cni-podman1",
"created": "2022-02-24T11:55:39.831735268-08:00",
"subnets": [
{
"subnet": "10.89.0.0/24",
"gateway": "10.89.0.1"
}
],
"ipv6_enabled": false,
"internal": false,
"dns_enabled": true,
"labels": {
"app": "k3d"
},
"ipam_options": {
"driver": "host-local"
}
}
]
λ › podman run --rm -it --network=k3d-k3s-default --ip=10.89.1.5 docker.io/library/alpine:latest ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether ca:a4:24:a7:c8:f9 brd ff:ff:ff:ff:ff:ff
inet 10.89.0.5/24 brd 10.89.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::c8a4:24ff:fea7:c8f9/64 scope link tentative
valid_lft forever preferred_lft forever
This looks specific to my machine, since rootless Podman appears to get past this point for everyone else, so I'll work on figuring it out on my end; I don't want to turn the issue thread into a troubleshooting session.
So after enabling cgroup v1 by setting the systemd.unified_cgroup_hierarchy=0 kernel parameter, k3d fails like so:
ERRO[0002] failed to gather environment information used for cluster creation: failed to run k3d-tools node for cluster 'MB': failed to create node 'k3d-MB-tools': runtime failed to create node 'k3d-MB-tools': failed to create container for node 'k3d-MB-tools': docker failed to create container 'k3d-MB-tools': Error response from daemon: container create: statfs /var/run/docker.sock: permission denied
After running podman system service --time=0 unix:///var/run/docker.sock and trying again, k3d successfully registers a server, but then hangs while waiting for an agent to come up:
❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
hostaliases: []
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml841035332 against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:54}
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha4#simple)
DEBU[0000] Configuration:
agents: 2
apiversion: k3d.io/v1alpha4
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
metadata:
name: MB
network: bridge
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
hostpidmode: false
serversmemory: ""
registries:
config: |
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
create:
hostport: "5000"
name: MB
use: []
servers: 1
subnet: ""
token: ""
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network:bridge Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc00029dda0 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:46195} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network:bridge Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc00029dda0 Config:mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
} HostAliases:[]}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-MB-server-0
settings:
workerConnections: 1024
DEBU[0000] Found multiline registries config embedded in SimpleConfig:
mirrors:
"k3d-registry":
endpoint:
- "http://k3d-registry.localhost:5000"
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:bridge ID: External:true IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0000cd6c0 0xc0000cd860 0xc0000cda00 0xc0000cdba0] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00013ce80 ServerLoadBalancer:0xc0002fa3b0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc0003a15f0 Use:[] Config:0xc00025a8a0}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:bridge ID:17f29b073143d8cd97b5bbe492bdeffec1c5fee55cc1fe2112c8b9335f8b6121 Created:2022-02-24 14:11:13.113752904 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:10.88.3.0/24 IPRange: Gateway:10.88.3.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'bridge' (17f29b073143d8cd97b5bbe492bdeffec1c5fee55cc1fe2112c8b9335f8b6121)
INFO[0000] Created image volume k3d-MB-images
INFO[0000] Creating node 'MB'
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Created container MB (ID: 9ff08854c055b508207c902c631a4b38e459ee77f2365d2de518997a1f315987)
INFO[0000] Successfully created registry 'MB'
DEBU[0000] no netlabel present on container /MB
DEBU[0000] failed to get IP for container /MB as we couldn't find the cluster network
DEBU[0000] no netlabel present on container /MB
DEBU[0000] failed to get IP for container /MB as we couldn't find the cluster network
INFO[0000] Container 'MB' is already connected to 'bridge'
DEBU[0000] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock)
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Created container k3d-MB-tools (ID: e9d3b91904ed263dabc6eff2fbfda6661d9011ca9f5810093bf4e5e5754a38e9)
DEBU[0000] Node k3d-MB-tools Start Time: 2022-02-25 10:25:41.091430337 +0000 GMT m=+0.917906869
INFO[0000] Starting Node 'k3d-MB-tools'
DEBU[0001] Truncated 2022-02-25 10:25:41.312800917 +0000 UTC to 2022-02-25 10:25:41 +0000 UTC
INFO[0001] Creating node 'k3d-MB-server-0'
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
DEBU[0001] Created container k3d-MB-server-0 (ID: 8de84df0fe2acb98bd404920a4b06898eea85504e975dcd29b041839f1aca81a)
DEBU[0001] Created node 'k3d-MB-server-0'
INFO[0001] Creating node 'k3d-MB-agent-0'
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock
DEBU[0002] Created container k3d-MB-agent-0 (ID: bfb83a0f63dacd9d190cf2f20751d3b7d68ec713bfab2ee7b990b5b6073171a2)
DEBU[0002] Created node 'k3d-MB-agent-0'
INFO[0002] Creating node 'k3d-MB-agent-1'
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock
DEBU[0002] Created container k3d-MB-agent-1 (ID: e995d2ae2272f56d6168c10c564b4d252732c110f38d54fba0ef9396ce8230f6)
DEBU[0002] Created node 'k3d-MB-agent-1'
INFO[0002] Creating LoadBalancer 'k3d-MB-serverlb'
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock
DEBU[0002] Created container k3d-MB-serverlb (ID: ac4b9080126704e029cf38398623b3c445bec3b83404edf89bd9f55a1009f604)
DEBU[0002] Created loadbalancer 'k3d-MB-serverlb'
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock
INFO[0002] Using the k3d-tools node to gather environment information
DEBU[0002] no netlabel present on container /k3d-MB-tools
DEBU[0002] failed to get IP for container /k3d-MB-tools as we couldn't find the cluster network
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock
INFO[0003] HostIP: using network gateway 10.88.3.1 address
INFO[0003] Starting cluster 'MB'
INFO[0003] Starting servers...
DEBU[0003] Deleting node k3d-MB-tools ...
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock
DEBU[0003] No fix enabled.
DEBU[0003] Node k3d-MB-server-0 Start Time: 2022-02-25 10:25:43.629199131 +0000 GMT m=+3.455675648
INFO[0003] Starting Node 'k3d-MB-server-0'
DEBU[0003] Truncated 2022-02-25 10:25:44.068160949 +0000 UTC to 2022-02-25 10:25:44 +0000 UTC
DEBU[0003] Waiting for node k3d-MB-server-0 to get ready (Log: 'k3s is up and running')
DEBU[0008] Finished waiting for log message 'k3s is up and running' from node 'k3d-MB-server-0'
INFO[0008] Starting agents...
DEBU[0008] DOCKER_SOCK=/var/run/docker.sock
DEBU[0008] No fix enabled.
DEBU[0008] Node k3d-MB-agent-1 Start Time: 2022-02-25 10:25:49.003795179 +0000 GMT m=+8.830271747
DEBU[0008] DOCKER_SOCK=/var/run/docker.sock
DEBU[0008] No fix enabled.
DEBU[0008] Node k3d-MB-agent-0 Start Time: 2022-02-25 10:25:49.016064825 +0000 GMT m=+8.842541386
INFO[0009] Starting Node 'k3d-MB-agent-1'
INFO[0009] Starting Node 'k3d-MB-agent-0'
DEBU[0009] Truncated 2022-02-25 10:25:49.304455169 +0000 UTC to 2022-02-25 10:25:49 +0000 UTC
DEBU[0009] Waiting for node k3d-MB-agent-1 to get ready (Log: 'Successfully registered node')
DEBU[0009] Truncated 2022-02-25 10:25:49.401069603 +0000 UTC to 2022-02-25 10:25:49 +0000 UTC
DEBU[0009] Waiting for node k3d-MB-agent-0 to get ready (Log: 'Successfully registered node')
Running podman logs on an agent shows a stream of the following error:
time="2022-02-25T10:29:19Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:44918->127.0.0.1:6444: read: connection reset by peer"
@geraldwuhoo You're hitting the error I attempted to fix in #986, try applying that patch.
@johnhamelink Try using Podman v4
Ensure the Podman system socket is available:
sudo systemctl enable --now podman.socket
# or sudo podman system service --time=0
To point k3d at the right Docker socket, create a symbolic link:
ln -s /run/podman/podman.sock /var/run/docker.sock
# or install the podman-docker package if your distribution provides one
sudo k3d cluster create
Make a fake system-wide Docker socket (for now):
sudo touch /var/run/docker.sock
sudo chmod a+rw /var/run/docker.sock
Ensure the Podman user socket is available:
systemctl --user enable --now podman.socket
# or podman system service --time=0
Set DOCKER_HOST when running k3d:
XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
k3d cluster create
@serverwentdown I had a go at your instructions above, but I'm still having issues with rootless Podman and bridge networking after installing podman-git and podman-docker-git, and building k3d from https://github.com/k3d-io/k3d/pull/986:
❯ systemctl --user start podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
bin/k3d cluster create --verbose
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:4.0.0-dev OSType:linux OS:arch Arch:amd64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
hostaliases: []
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.22.6-k3s1
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
hostpidmode: false
serversmemory: ""
registries:
config: ""
use: []
servers: 1
subnet: ""
token: ""
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
==========================
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:39181} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-k3s-default-server-0
settings:
workerConnections: 1024
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0005036c0 0xc000503860] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00041cd40 ServerLoadBalancer:0xc000426890 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
DEBU[0000] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock)
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
ERRO[0000] Failed to run tools container for cluster 'k3s-default'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-k3s-default-server-0: failed to create node: runtime failed to create node 'k3d-k3s-default-server-0': failed to create container for node 'k3d-k3s-default-server-0': docker failed to create container 'k3d-k3s-default-server-0': Error response from daemon: container create: invalid config provided: Networks and static ip/mac address can only be used with Bridge mode networking
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3s-default'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
❯ podman --version
podman version 4.0.0-dev
❯ bin/k3d --version
k3d version v5.1.0-74-gdd07011f
k3s version v1.22.6-k3s1 (default)
❯ podman network ls
NETWORK ID NAME DRIVER
89a5dde53e7c k3d-k3s-default bridge
2f259bab93aa podman bridge
❯ podman network inspect k3d-k3s-default
[
{
"name": "k3d-k3s-default",
"id": "89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776",
"driver": "bridge",
"network_interface": "cni-podman1",
"created": "2022-03-01T17:29:49.104065781Z",
"subnets": [
{
"subnet": "10.89.0.0/24",
"gateway": "10.89.0.1"
}
],
"ipv6_enabled": false,
"internal": false,
"dns_enabled": false,
"labels": {
"app": "k3d"
},
"ipam_options": {
"driver": "host-local"
}
}
]
There's still one more thing I need to check out:
$ sudo Downloads/k3d-linux-amd64 cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
ERRO[0000] Failed to run tools container for cluster 'k3s-default'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] Starting new tools node...
ERRO[0001] Failed to run tools container for cluster 'k3s-default'
ERRO[0001] failed to gather environment information used for cluster creation: failed to run k3d-tools node for cluster 'k3s-default': failed to create node 'k3d-k3s-default-tools': runtime failed to create node 'k3d-k3s-default-tools': failed to create container for node 'k3d-k3s-default-tools': docker failed to pull image 'docker.io/rancher/k3d-tools:5.3.0': docker failed to pull the image 'docker.io/rancher/k3d-tools:5.3.0': Error response from daemon: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3s-default'
INFO[0001] Deleting cluster network 'k3d-k3s-default'
INFO[0001] Deleting 2 attached volumes...
WARN[0001] Failed to delete volume 'k3d-k3s-default-images' of cluster 'failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images': k3s-default -> Try to delete it manually
FATA[0001] Cluster creation FAILED, all changes have been rolled back!
That Error response from daemon: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY sure was unexpected.
@jiridanek Which version of k3d and Podman are you using? It'd help me narrow down the cause. Anyway, you can find a solution in this blog post: https://www.redhat.com/sysadmin/container-image-short-names
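The linked post boils down to configuring Podman's short-name resolution so it never needs a TTY prompt. A sketch of what that configuration looks like (the drop-in path follows the containers-registries.conf conventions and may vary by distro):

```toml
# /etc/containers/registries.conf.d/00-shortnames.conf
# Resolve unqualified image names against Docker Hub without prompting.
unqualified-search-registries = ["docker.io"]

# Alternatively, relax enforcement entirely:
# short-name-mode = "permissive"
```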
@serverwentdown
[jdanek@fedora ~]$ Downloads/k3d-linux-amd64 --version
k3d version v5.3.0
k3s version v1.22.6-k3s1 (default)
[jdanek@fedora ~]$ podman --version
podman version 3.4.4
@serverwentdown After upgrading to the latest k3d, which reports k3d version v5.4.1; k3s version v1.22.7-k3s1 (default), the problem went away and I got a different failure instead:
[jdanek@fedora ~]$ sudo Downloads/k3d-linux-amd64 cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.1'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.22.7-k3s1'
INFO[0012] Starting Node 'k3d-k3s-default-tools'
INFO[0026] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0026] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.1'
INFO[0034] Using the k3d-tools node to gather environment information
INFO[0035] HostIP: using network gateway 10.89.1.1 address
INFO[0035] Starting cluster 'k3s-default'
INFO[0035] Starting servers...
INFO[0035] Starting Node 'k3d-k3s-default-server-0'
INFO[0039] All agents already running.
INFO[0039] Starting helpers...
INFO[0039] Starting Node 'k3d-k3s-default-serverlb'
ERRO[0047] Failed Cluster Start: error during post-start cluster preparation: failed to get cluster network k3d-k3s-default to inject host records into CoreDNS: failed to parse IP of container k3d-k3s-default: netaddr.ParseIPPrefix("10.89.1.4"): no '/'
ERRO[0047] Failed to create cluster >>> Rolling Back
INFO[0047] Deleting cluster 'k3s-default'
INFO[0047] Deleting cluster network 'k3d-k3s-default'
INFO[0047] Deleting 2 attached volumes...
WARN[0047] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
FATA[0047] Cluster creation FAILED, all changes have been rolled back!
I am facing the same issue (with the same version of k3d/k3s). Let me know if I can provide anything else which might be helpful.
Podman (https://podman.io/) is a drop-in alternative to Docker that fixes some of Docker's architectural issues: it is daemonless and can run rootless.
More info: https://developers.redhat.com/articles/podman-next-generation-linux-container-tools/