Mirantis / virtlet

Kubernetes CRI implementation for running VM workloads
Apache License 2.0

Can't find containers of virtlet-cfr pod #878

Closed. changyi2409 closed this issue 5 years ago.

changyi2409 commented 5 years ago

Hi, I used the virtlet demo.sh to deploy a demo cluster with two nodes for running a cirros VM. The system works fine, but I can't find any containers belonging to the virtlet-cfr72 pod on the worker node. It seems there are only the standard k8s containers on the worker node.

root@kube-master:~# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   6d22h   v1.13.0
kube-node-1   Ready    <none>   6d21h   v1.13.0

root@kube-master:~# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-whwrn                1/1     Running   0          6d22h
etcd-kube-master                        1/1     Running   1          6d22h
kube-apiserver-kube-master              1/1     Running   1          6d22h
kube-controller-manager-kube-master     1/1     Running   1          6d21h
kube-proxy-5rv6d                        1/1     Running   0          6d22h
kube-proxy-bgrvq                        1/1     Running   1          6d22h
kube-scheduler-kube-master              1/1     Running   1          6d21h
kubernetes-dashboard-769df5545f-cq7xh   1/1     Running   0          6d22h
virtlet-cfr72                           3/3     Running   1          6d21h

root@kube-master:~# kubectl describe pod virtlet-cfr72 -n kube-system
Name:               virtlet-cfr72
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               kube-node-1/10.192.0.3
...

root@kube-node-1:/var/lib# docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED      STATUS      PORTS   NAMES
a0965c44ac7f   gcr.io/google_containers/kubernetes-dashboard-amd64    "/dashboard --port=9…"   6 days ago   Up 6 days           k8s_kubernetes-dashboard_kubernetes-dashboard-769df5545f-cq7xh_kube-system_e8999c8b-6a69-11e9-b6ca-367fafc6230c_0
e058736d09be   k8s.gcr.io/coredns                                     "/coredns -conf /etc…"   6 days ago   Up 6 days           k8s_coredns_coredns-86c58d9df4-whwrn_kube-system_e89988e9-6a69-11e9-b6ca-367fafc6230c_0
e696dcef19df   k8s.gcr.io/pause:3.1                                   "/pause"                 6 days ago   Up 6 days           k8s_POD_coredns-86c58d9df4-whwrn_kube-system_e89988e9-6a69-11e9-b6ca-367fafc6230c_0
a1624c2a9d28   k8s.gcr.io/pause:3.1                                   "/pause"                 6 days ago   Up 6 days           k8s_POD_kubernetes-dashboard-769df5545f-cq7xh_kube-system_e8999c8b-6a69-11e9-b6ca-367fafc6230c_0
a0df02310356   eca69a070500                                           "/usr/local/bin/kube…"   6 days ago   Up 6 days           k8s_kube-proxy_kube-proxy-5rv6d_kube-system_be55853d-6a69-11e9-8c22-367fafc6230c_1
797516fd0fa3   k8s.gcr.io/pause:3.1                                   "/pause"                 6 days ago   Up 6 days           k8s_POD_kube-proxy-5rv6d_kube-system_be55853d-6a69-11e9-8c22-367fafc6230c_1
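
For reference, one quick way to check whether the virtlet-cfr72 pod's containers appear in Docker at all is to filter docker ps by the pod name, since the kubelet names containers as k8s_<container>_<pod>_<namespace>_<uid>_<attempt>. This is only a sketch using the standard docker ps name filter; the pod name is taken from the kubectl output above:

root@kube-node-1:~# docker ps --filter name=virtlet-cfr72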

changyi2409 commented 5 years ago

Is there any guideline for users on updating the image URL links? Do I need to update the configmap YAML, recreate the configmap, and recreate the virtlet pod to enable the new image translation links?

Below are the default image links for virtlet image translation.

root@kube-master:~# kubectl get configmap -n kube-system virtlet-image-translations -o yaml
apiVersion: v1
data:
  demo_images.yaml: |
    translations:
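
The output above is truncated. For illustration only, a translations entry in demo_images.yaml maps a short image name to a download URL, roughly as in the sketch below (this assumes the name/url layout used by virtlet's image translation rules; the cirros name and the URL here are placeholders, not the exact values from the demo configmap):

    translations:
      - name: cirros
        # hypothetical URL; point this at the qcow2/raw image that "cirros" should resolve to
        url: https://example.com/images/cirros-0.3.5-x86_64-disk.img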

jellonek commented 5 years ago

Yep, exactly. Actually, virtlet loads the translations config (from the config map and from CRDs) only once, at startup, so after updating values in the CRD or config map the operator needs to recreate all virtlet pod instances (on all nodes), which can be done with e.g.:

kubectl -n kube-system delete pods --all -l runtime=virtlet
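
Put together, the update flow might look roughly like this sketch (the configmap name virtlet-image-translations, the key demo_images.yaml, and the runtime=virtlet label are taken from earlier in the thread; the --from-file step assumes you keep the edited translations in a local demo_images.yaml):

# rebuild the configmap from the edited file, then recreate the virtlet pods
kubectl -n kube-system create configmap virtlet-image-translations --from-file=demo_images.yaml --dry-run -o yaml | kubectl apply -f -
kubectl -n kube-system delete pods --all -l runtime=virtlet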
jellonek commented 5 years ago

Oops, this info was incorrect: the translation configuration (combined from the CRD and configmap files) should be reloaded for each pod definition, so there should be no need to kill the existing virtlet pod instances.
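
In other words, once the configmap is updated it should be enough to create a new VM pod; the translation is applied when that pod is defined. A minimal sketch in the style of the virtlet examples follows; the virtlet.cloud/cirros image name and the target-runtime annotation mirror the demo convention, and scheduling details (e.g. node affinity for virtlet nodes) are left out:

apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # route this pod to the virtlet runtime via the CRI proxy
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  containers:
  - name: cirros-vm
    # "cirros" is resolved through the image translation config
    image: virtlet.cloud/cirros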

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.