I have written this up in detail in the past; I just don't know where it landed. I don't have access to those old write-ups, but I'll start a dump of info here, and then I can get it into the READMEs. Here's some bare-bones information:
Leaving the bootloader steps aside for now, the heart of the boot starts after control has been handed to cube-essential. That hand-off can be via bootloader -> initramfs/initrd, or whatever the target dictates.
Essential has only one container runtime: pflask, which is super tiny and basic, and has just enough functionality to launch the OCI containers that complete the rest of the boot process. Essential is hardly ever upgraded and should be considered like firmware, hence the small size, low attack surface and minimal services.
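Conceptually, essential's boot service hands a system container rootfs to pflask and lets its init take over. Purely an illustrative sketch, not the actual service script; the option spelling here is from my recollection of pflask's docs, so verify against your version:

# illustrative only -- launch a system container's rootfs under pflask,
# handing it its own init as PID 1 inside fresh namespaces
pflask --chroot /path/to/dom0-rootfs -- /sbin/init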
Essential has a basic service that starts those core pflask container(s). One of these is dom0; the other is typically the VRF. Those are system containers, and they supply the services for finalizing the boot.
The VRF does the routing, and dom0 has the runc container runtime (which it grafts onto essential for some namespace magic). Dom0 has a list of containers to start and a service that starts them (via runc).
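So the overall chain, summarizing the above (container names taken from the cube-ctl status output later in this thread):

bootloader (GRUB on x86_64 / U-Boot on ARM)
    -> initramfs/initrd
    -> cube-essential (starts system containers via pflask)
        -> cube-vrf  (routing)
        -> dom0      (hosts runc, starts the remaining containers)
            -> cube-server, cube-desktop, cube-k8s, ... (via runc)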
What is started by which container is a property of how they were installed. The cube-installer sets this for the system/default containers, and the others are set by the user when they are installed (see the "autostart" cube-cfg/cube-ctl calls). What devices are available to a container, what privileges it has, what it starts, etc., are all properties of the container itself or of the configuration set when the container was installed.
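As a minimal sketch of those knobs from dom0, using only the commands that appear later in this thread (the full set of property keys is install-specific):

# show installed containers, their runtime, status, attributes and addresses
cube-ctl status
# set a per-container property, e.g. mark a container as a system container
cube-cfg -n <container> set cube.container.system:true
# start an installed container by hand
cube-ctl start <container>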
OK, great... better understanding now. As for essential: being "firmware", there really isn't a need for it to get a tty, is there? That would prevent "local" access in a deployment scenario... or it could be moved to a later tty so it's not visible.
Whether essential has a tty can be configured (but I don't recall it being changed recently), and it has been argued both ways whether it should have one by default. The active tty/vt is normally presented as dom0 or cube-server/desktop, to give the user the feeling that the plumbing of the system doesn't need to be used for day-to-day activities.
OK, to continue this "thread"... "Dom0 has a list of containers to start and a service that starts them (via runc)." Where is this "list" located, and does it include specified parameters / privileges per container?
Objective: deploy 2, 3 or 4 cube-k8s nodes with different hostnames on a single OverC server: 1 master and 3 workers. I can bring up k8s on one cube-k8s, no worries, and install Weave; so far so good.

ssh into cube-k8s and run:

mount -t devtmpfs none /dev
mount devpts /dev/pts -t devpts
rm /opt/cni/bin
cp -a /usr/libexec/cni/ /opt/cni/bin
vi /etc/hosts   (add 192.168.1.42 cube-k8s)
swapoff -a
kubeadm init --cri-socket /var/run/dockershim.sock --ignore-preflight-errors=ALL
This results in:

W0809 16:12:40.581320     919 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	[WARNING KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.19.0-rc.3.31+bdc575e10c35a3-dirty" Control plane version: "1.18.6"
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cube-k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.124]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [cube-k8s localhost] and IPs [192.168.122.124 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [cube-k8s localhost] and IPs [192.168.122.124 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.503100 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node cube-k8s as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node cube-k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: gv4gre.ymeijp4bcrza2e2l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.122.124:6443 --token gv4gre.ymeijp4bcrza2e2l \
    --discovery-token-ca-cert-hash sha256:6a79f91c3a7e85a3532ad40301a7e25cbaf7fdbcb8483085f1a3078bc1d18e00
results as expected on cube-k8s
kubectl get node
NAME       STATUS     ROLES    AGE    VERSION
cube-k8s   NotReady   master   116s   v1.19.0-rc.3.31+bdc575e10c35a3-dirty
root@cube-k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6f5c7bbdfb-5bs8w           0/1     Pending   0          118s
kube-system   coredns-6f5c7bbdfb-svdr5           0/1     Pending   0          118s
kube-system   etcd-cube-k8s                      1/1     Running   0          119s
kube-system   kube-apiserver-cube-k8s            1/1     Running   0          119s
kube-system   kube-controller-manager-cube-k8s   1/1     Running   0          119s
kube-system   kube-proxy-bs424                   1/1     Running   0          118s
kube-system   kube-scheduler-cube-k8s            1/1     Running   0          119s
Install Weave:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
results as expected ...
root@cube-k8s:~# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
cube-k8s   Ready    master   5m29s   v1.19.0-rc.3.31+bdc575e10c35a3-dirty
root@cube-k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6f5c7bbdfb-5bs8w           1/1     Running   0          5m20s
kube-system   coredns-6f5c7bbdfb-svdr5           1/1     Running   0          5m20s
kube-system   etcd-cube-k8s                      1/1     Running   0          5m21s
kube-system   kube-apiserver-cube-k8s            1/1     Running   0          5m21s
kube-system   kube-controller-manager-cube-k8s   1/1     Running   0          5m21s
kube-system   kube-proxy-bs424                   1/1     Running   0          5m20s
kube-system   kube-scheduler-cube-k8s            1/1     Running   0          5m21s
kube-system   weave-net-j6p62                    2/2     Running   0          95s
Then scp cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2 to the target:
scp build/tmp/deploy/images/genericx86-64/cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2 root@192.168.122.124:/root/
root@192.168.122.124's password:
cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2        100%  741MB 616.1MB/s   00:01
then to dom0:

root@cube-k8s:~# scp cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2 root@192.168.42.3:/root/
root@192.168.42.3's password:
cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2        100%  741MB 391.1MB/s   00:01
root@cube-k8s:~# ssh root@192.168.42.3
root@192.168.42.3's password:
Last login: Sun Aug  9 15:54:13 2020 from 192.168.42.1
root@cube-dom0:~# cube-ctl add -n cube-node1 cube-k8s-node-genericx86-64-20200809085758.rootfs.tar.bz2
[INFO] Installing container cube-node1 to /opt/container//cube-node1
[INFO] Extracting rootfs.....
[INFO] Succeeded
[INFO] Performing OCI configuration ...
root@cube-dom0:~# cube-ctl status
name           type     status      attributes   addresses
cube-builder   runc     running     --           192.168.42.240
cube-desktop   runc     running     --           192.168.42.33
cube-k8s       runc     running     netprime     192.168.42.1,192.168.122.124
cube-node1     runc     available   --           --
cube-server    runc     running     --           192.168.42.178
cube-vrf       pflask   running     vrf          192.168.42.4
dom0           pflask   running     --           192.168.42.3
We now have cube-node1 available, so since cube-k8s is set with cube.container.system, we do the same for cube-node1:
cube-cfg -n cube-node1 set cube.container.system:true
then
root@cube-dom0:~# cube-ctl start cube-node1
root@cube-dom0:~# cube-ctl status
name           type     status    attributes   addresses
cube-builder   runc     running   --           192.168.42.240
cube-desktop   runc     running   --           192.168.42.33
cube-k8s       runc     running   netprime     192.168.42.1,192.168.122.124
cube-node1     runc     running   dirty        192.168.42.133
cube-server    runc     running   --           192.168.42.178
cube-vrf       pflask   running   vrf          192.168.42.4
dom0           pflask   running   --           192.168.42.3
ssh into cube-node1 and run:

mount -t devtmpfs none /dev
mount devpts /dev/pts -t devpts
rm /opt/cni/bin
cp -a /usr/libexec/cni/ /opt/cni/bin
swapoff -a

edit /etc/hosts (add 192.168.42.1 cube-k8s, and 192.168.42.xxx cube-node1)
then run
kubeadm join 192.168.122.124:6443 --token gv4gre.ymeijp4bcrza2e2l \
--discovery-token-ca-cert-hash sha256:6a79f91c3a7e85a3532ad40301a7e25cbaf7fdbcb8483085f1a3078bc1d18e00 --cri-socket /var/run/dockershim.sock
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
on cube-k8s
kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS              RESTARTS   AGE
kube-system   coredns-6f5c7bbdfb-5bs8w           1/1     Running             0          26m
kube-system   coredns-6f5c7bbdfb-svdr5           1/1     Running             0          26m
kube-system   etcd-cube-k8s                      1/1     Running             0          26m
kube-system   kube-apiserver-cube-k8s            1/1     Running             0          26m
kube-system   kube-controller-manager-cube-k8s   1/1     Running             0          26m
kube-system   kube-proxy-2vg6r                   0/1     RunContainerError   1          69s
kube-system   kube-proxy-bs424                   1/1     Running             0          26m
kube-system   kube-scheduler-cube-k8s            1/1     Running             0          26m
kube-system   weave-net-5986m                    0/2     RunContainerError   2          69s
kube-system   weave-net-j6p62                    2/2     Running             0          22m
Now we start to fail...
In journalctl -f I noticed this:

Aug 09 16:40:27 cube-node1 dockerd[268]: time="2020-08-09T16:40:27.618336948Z" level=error msg="Handler for POST /v1.40/containers/b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05/start returned error: OCI runtime create failed: container_linux.go:345: starting container process caused \"apply caps: operation not permitted\": unknown"
So if cube-k8s can run Weave, why can't an injected cube-k8s image running as cube-node1?
I would have thought this would be simple... :) Any insight?
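As a possible next debugging step for the "apply caps" error, comparing the capability sets granted to the two containers from dom0 might show the difference. This assumes cube-ctl leaves a standard OCI config.json under the /opt/container/<name> install path shown above; that exact location is a guess:

# assumption: each container's OCI runtime spec is /opt/container/<name>/config.json
# diff the capabilities granted to the working container vs the injected one
diff <(jq '.process.capabilities' /opt/container/cube-k8s/config.json) \
     <(jq '.process.capabilities' /opt/container/cube-node1/config.json)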
I've added journalctl -f logs from cube-node1:
root@cube-node1:~# journalctl -f
-- Logs begin at Sun 2020-08-09 16:30:07 UTC. --
Aug 09 16:40:27 cube-node1 systemd[128]: var-lib-docker-overlay2-723261dbf88bb0d6a3697ad13105b57b566eb5d198d96ac33e3c393a55a058d0-merged.mount: Succeeded.
Aug 09 16:40:27 cube-node1 dockerd[268]: time="2020-08-09T16:40:27.618309036Z" level=error msg="b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05 cleanup: failed to delete container from containerd: no such container"
Aug 09 16:40:27 cube-node1 dockerd[268]: time="2020-08-09T16:40:27.618336948Z" level=error msg="Handler for POST /v1.40/containers/b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05/start returned error: OCI runtime create failed: container_linux.go:345: starting container process caused \"apply caps: operation not permitted\": unknown"
Aug 09 16:40:27 cube-node1 kubelet[703]: E0809 16:40:27.619262 703 remote_runtime.go:248] StartContainer "b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05" from runtime service failed: rpc error: code = Unknown desc = failed to start container "b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "apply caps: operation not permitted": unknown
Aug 09 16:40:27 cube-node1 kubelet[703]: E0809 16:40:27.619353 703 kuberuntime_manager.go:798] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.18.6,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-proxy-token-mjsh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-2vg6r_kube-system(d4463fe1-d07b-4377-ad89-a9993bff619f): RunContainerError: failed to start container "b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "apply caps: operation not permitted": unknown
Aug 09 16:40:27 cube-node1 kubelet[703]: E0809 16:40:27.619379 703 pod_workers.go:191] Error syncing pod d4463fe1-d07b-4377-ad89-a9993bff619f ("kube-proxy-2vg6r_kube-system(d4463fe1-d07b-4377-ad89-a9993bff619f)"), skipping: failed to "StartContainer" for "kube-proxy" with RunContainerError: "failed to start container \"b0e116e123f723d8a1a9f68aedf760326365aaceb6485bc6599b641ceb0c0b05\": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused \"apply caps: operation not permitted\": unknown"
Aug 09 16:40:29 cube-node1 kubelet[703]: W0809 16:40:29.777753 703 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Aug 09 16:40:31 cube-node1 kubelet[703]: E0809 16:40:31.329269 703 kubelet.go:2100] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 09 16:40:34 cube-node1 kubelet[703]: W0809 16:40:34.778305 703 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Aug 09 16:40:36 cube-node1 kubelet[703]: E0809 16:40:36.340841 703 kubelet.go:2100] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 09 16:40:39 cube-node1 kubelet[703]: I0809 16:40:39.061491 703 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: aae9b8182ea6643f992adb97995861ec448e94be17ebf3be49c34b50aafe049d
Aug 09 16:40:39 cube-node1 kubelet[703]: I0809 16:40:39.061681 703 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8718abc3dd84e67f277051063dfe04558d316d53e8d9d7c9806882e9f8cd53c3
Aug 09 16:40:39 cube-node1 dockerd[268]: time="2020-08-09T16:40:39.076929124Z" level=warning msg="Your kernel does not support CPU cfs period or the cgroup is not mounted. Period discarded."
Aug 09 16:40:39 cube-node1 systemd[1]: var-lib-docker-overlay2-dc6e918ee74eb4765069e9865dcf45dd92cb3bbfe8eb48d98f5150998d4a04b7\x2dinit-merged.mount: Succeeded.
Aug 09 16:40:39 cube-node1 systemd[128]: var-lib-docker-overlay2-dc6e918ee74eb4765069e9865dcf45dd92cb3bbfe8eb48d98f5150998d4a04b7\x2dinit-merged.mount: Succeeded.
Aug 09 16:40:39 cube-node1 dockerd[278]: time="2020-08-09T16:40:39.179032160Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967/shim.sock" debug=false pid=1698
Aug 09 16:40:39 cube-node1 dockerd[278]: time="2020-08-09T16:40:39.220599412Z" level=info msg="shim reaped" id=a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967
Aug 09 16:40:39 cube-node1 dockerd[268]: time="2020-08-09T16:40:39.230820645Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 09 16:40:39 cube-node1 dockerd[268]: time="2020-08-09T16:40:39.230849052Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 09 16:40:39 cube-node1 systemd[1]: var-lib-docker-overlay2-dc6e918ee74eb4765069e9865dcf45dd92cb3bbfe8eb48d98f5150998d4a04b7-merged.mount: Succeeded.
Aug 09 16:40:39 cube-node1 systemd[128]: var-lib-docker-overlay2-dc6e918ee74eb4765069e9865dcf45dd92cb3bbfe8eb48d98f5150998d4a04b7-merged.mount: Succeeded.
Aug 09 16:40:39 cube-node1 dockerd[268]: time="2020-08-09T16:40:39.268142289Z" level=error msg="a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967 cleanup: failed to delete container from containerd: no such container"
Aug 09 16:40:39 cube-node1 dockerd[268]: time="2020-08-09T16:40:39.268189226Z" level=error msg="Handler for POST /v1.40/containers/a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967/start returned error: OCI runtime create failed: container_linux.go:345: starting container process caused \"apply caps: operation not permitted\": unknown"
Aug 09 16:40:39 cube-node1 kubelet[703]: E0809 16:40:39.270257 703 remote_runtime.go:248] StartContainer "a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967" from runtime service failed: rpc error: code = Unknown desc = failed to start container "a8ddf12c8890b5a382fb9c20435cc756a04841d375cabf39683533ab0f3af967": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "apply caps: operation not permitted": unknown
Aug 09 16:40:39 cube-node1 kubelet[703]: E0809 16:40:39.270404 703 kuberuntime_manager.go:798] container &Container{Name:weave,Image:docker.io/weaveworks/weave-kube:2.7.0,Command:[/home/weave/launch.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOSTNAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{50 -3} {
Can someone provide me an architectural overview of the OverC boot process? x86_64 uses GRUB, and boots what / where / how? How is the rest of the stack initialized to spawn all the containers? My guess would be via some init script / systemd process and LXC? I don't need a dev doc, as I understand it; just a description of the process.
Simply put, I've so far worked out x86_64, so now I'm on to making this work/boot on an ARMv8 device as well, via U-Boot.