Closed blueworm-lee closed 3 years ago
Hello,

The Kubernetes version we provide is 1.18.4.
It is difficult for us to test against every version and guarantee stable results.
Could you test with the provided version and share the results?

Best regards, Hoon Jo.
As shown below, there is no problem:
[root@m-k8s ~]# k get nodes
NAME STATUS ROLES AGE VERSION
m-k8s Ready master 11m v1.18.4
w1-k8s Ready <none> 8m20s v1.18.4
w2-k8s Ready <none> 5m42s v1.18.4
w3-k8s Ready <none> 3m9s v1.18.4
[root@m-k8s ~]# k apply -f ~/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
[root@m-k8s ~]# k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-5bb8fb4bb6-x8nxr 1/1 Running 0 2m3s
kube-system calico-kube-controllers-99c9b6f64-tg29d 1/1 Running 0 11m
kube-system calico-node-gjghq 1/1 Running 0 11m
kube-system calico-node-p7xc7 1/1 Running 0 5m56s
kube-system calico-node-qrxqh 1/1 Running 0 3m23s
kube-system calico-node-tcj5x 1/1 Running 0 8m34s
kube-system coredns-66bff467f8-f5qld 1/1 Running 0 11m
kube-system coredns-66bff467f8-qps79 1/1 Running 0 11m
kube-system etcd-m-k8s 1/1 Running 0 11m
kube-system kube-apiserver-m-k8s 1/1 Running 0 11m
kube-system kube-controller-manager-m-k8s 1/1 Running 0 11m
kube-system kube-proxy-5xkvj 1/1 Running 0 8m34s
kube-system kube-proxy-b2chr 1/1 Running 0 5m56s
kube-system kube-proxy-hzg9x 1/1 Running 0 11m
kube-system kube-proxy-vgwsf 1/1 Running 0 3m23s
kube-system kube-scheduler-m-k8s 1/1 Running 0 11m
Since there has been no further response, I will close this issue. Thank you.
Test Pod creation completed (normal)
[root@localhost _Book_k8sInfra]# kubectl create deployment in-hname-pod --image=sysnet4admin/echo-hname
deployment.apps/in-hname-pod created
[root@localhost _Book_k8sInfra]# kubectl create deployment in-ip-pod --image=sysnet4admin/echo-ip
deployment.apps/in-ip-pod created
[root@localhost _Book_k8sInfra]#
Test Pod status check (normal)
[root@localhost _Book_k8sInfra]# kubectl get pods
NAME READY STATUS RESTARTS AGE
in-hname-pod-97d9d67d7-m76ht 1/1 Running 0 3m32s
in-ip-pod-67847c75df-qwzvw 1/1 Running 0 3m27s
[root@localhost _Book_k8sInfra]#
Nginx-Ingress installation (normal)
[root@localhost _Book_k8sInfra]# kubectl apply -f ./ch3/3.3.2/ingress-nginx.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/nginx-ingress-role created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
[root@localhost _Book_k8sInfra]#
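The deprecation warnings above come from the manifest declaring its RBAC objects under `rbac.authorization.k8s.io/v1beta1`, which v1.17+ flags and v1.22 removes. A minimal sketch of silencing them by rewriting the apiVersions in a working copy of the manifest; it is shown on a two-line excerpt so the substitution is visible end to end, with the assumption that the same substitution applies to the real `ch3/3.3.2/ingress-nginx.yaml`:

```shell
# Excerpt standing in for the full manifest (assumption: the real file is
# the book's ch3/3.3.2/ingress-nginx.yaml; substitute its path below).
cat > /tmp/rbac-excerpt.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
EOF

# Replace every deprecated rbac v1beta1 apiVersion with v1 in place.
sed -i 's|rbac.authorization.k8s.io/v1beta1|rbac.authorization.k8s.io/v1|' /tmp/rbac-excerpt.yaml

cat /tmp/rbac-excerpt.yaml   # apiVersion now reads rbac.authorization.k8s.io/v1
```

Note this only removes the warnings; it does not by itself explain or fix the CrashLoopBackOff below.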
Nginx-Ingress Pod status check (keeps restarting)
[root@localhost _Book_k8sInfra]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-65886f4f5d-drltr 0/1 CrashLoopBackOff 7 14m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 0/1 1 0 14m

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-65886f4f5d 1 1 0 14m
[root@localhost _Book_k8sInfra]#
[root@localhost _Book_k8sInfra]# kubectl describe pod nginx-ingress-controller-65886f4f5d-drltr -n ingress-nginx
Name:         nginx-ingress-controller-65886f4f5d-drltr
Namespace:    ingress-nginx
Priority:     0
Node:         node2/192.168.110.92
Start Time:   Tue, 24 Aug 2021 14:02:26 +0900
Labels:       app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              pod-template-hash=65886f4f5d
Annotations:  cni.projectcalico.org/podIP: 192.168.104.51/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller
              prometheus.io/port: 10254
              prometheus.io/scrape: true
Status:       Running
IP:           192.168.104.51
IPs:
  IP:           192.168.104.51
Controlled By:  ReplicaSet/nginx-ingress-controller-65886f4f5d
Containers:
  nginx-ingress-controller:
    Container ID:  docker://ea2f118f74df5b82982923f37111eaa6ac80387f65760dcbd8b4d8a1d234c477
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:b312c91d0de688a21075078982b5e3a48b13b46eda4df743317d3059fc3ca0d9
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --publish-service=$(POD_NAMESPACE)/ingress-nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Running
      Started:      Tue, 24 Aug 2021 14:04:26 +0900
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Tue, 24 Aug 2021 14:03:56 +0900
      Finished:     Tue, 24 Aug 2021 14:04:26 +0900
    Ready:          False
    Restart Count:  3
    Requests:
      cpu:        100m
      memory:     90Mi
    Liveness:     http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:    http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-65886f4f5d-drltr (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph8zf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-ph8zf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 2m3s default-scheduler Successfully assigned ingress-nginx/nginx-ingress-controller-65886f4f5d-drltr to node2
Normal Pulling 2m2s kubelet Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0"
Normal Pulled 101s kubelet Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0" in 21.35022518s
Normal Created 63s (x2 over 100s) kubelet Created container nginx-ingress-controller
Normal Started 63s (x2 over 100s) kubelet Started container nginx-ingress-controller
Normal Killing 63s kubelet Container nginx-ingress-controller failed liveness probe, will be restarted
Warning FailedPreStopHook 63s kubelet Exec lifecycle hook ([/wait-shutdown]) for Container "nginx-ingress-controller" in Pod "nginx-ingress-controller-65886f4f5d-drltr_ingress-nginx(4b7c49a2-4cc8-4412-9b7a-2ed785b1240f)" failed - error: command '/wait-shutdown' exited with 137: , message: ""
Normal Pulled 63s kubelet Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0" already present on machine
Warning Unhealthy 33s (x10 over 99s) kubelet Readiness probe failed: Get "http://192.168.104.51:10254/healthz": dial tcp 192.168.104.51:10254: connect: connection refused
Warning Unhealthy 33s (x6 over 83s) kubelet Liveness probe failed: Get "http://192.168.104.51:10254/healthz": dial tcp 192.168.104.51:10254: connect: connection refused
[root@localhost _Book_k8sInfra]#
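The describe output records two non-zero exit codes: 143 for the container's Last State and 137 for the `/wait-shutdown` pre-stop hook. Exit codes above 128 mean "terminated by signal (code minus 128)", which is a quick way to read such logs. A small demonstration in plain shell, no cluster needed:

```shell
# Exit code 143 = 128 + 15: the container received SIGTERM, matching the
# "failed liveness probe, will be restarted" event from the kubelet.
kill -l $((143 - 128))   # prints TERM (or SIGTERM, depending on the shell)

# Exit code 137 = 128 + 9: SIGKILL, sent when the container does not stop
# within the termination grace period.
kill -l $((137 - 128))   # prints KILL (or SIGKILL)
```

So the restarts themselves are a symptom: the kubelet kills the container because `:10254/healthz` never answers, not because the process crashes on its own signal.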
For reference:
[root@localhost _Book_k8sInfra]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost _Book_k8sInfra]#
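The version output shows both client and server at v1.21.3, while the maintainer's supported environment is v1.18.4; that gap is the most likely variable in this thread. A quick way to compare two Kubernetes version strings (values copied from the outputs above; no cluster access required):

```shell
# Version strings taken from this thread: the book's target vs. the
# reporter's cluster as printed by kubectl version.
book="v1.18.4"
cluster="v1.21.3"

# sort -V orders version strings numerically, so the last line is the newer one.
newest=$(printf '%s\n%s\n' "$book" "$cluster" | sort -V | tail -n1)
if [ "$newest" = "$cluster" ] && [ "$book" != "$cluster" ]; then
  echo "cluster is newer than the book's target"
fi
```

This does not prove the mismatch causes the CrashLoopBackOff, but it is the first thing to rule out, which is what the maintainer's reply asks for.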