Could you kindly update the bug with the template?
It seems your sidecar injector pod is not running. Can you double-check that it is running?
cc @ayj @yusuoh
Sorry, I'm not being very helpful.
[root@master1 istio-0.7.1]# kubectl -n istio-system get deployment -listio=sidecar-injector
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
istio-sidecar-injector   1         1         1            1           37m
##########
deployment
NAMESPACE      NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE
istio-system   istio-ca-86f55cc46f-cz2vl                1/1     Running   0          40m   10.233.53.4   node2
istio-system   istio-ingress-868d5f978b-ts77q           1/1     Running   0          40m   10.233.63.5   node1
istio-system   istio-mixer-65dc5549d6-2zngs             3/3     Running   0          40m   10.233.63.4   node1
istio-system   istio-pilot-657cb5ddf7-kw8pv             2/2     Running   0          40m   10.233.53.3   node2
istio-system   istio-sidecar-injector-5b8c78fd6-bktlt   1/1     Running   0          37m   10.233.63.6   node1
@fengjian1585, is this still an issue?
@ayj I have the same issue!
Also have the same error message (istio-release-0.8-20180519-22-09).
Any updates on this? I am experiencing the same issue.
@milosradovanovic @kirgene @h4ckroot @fengjian1585 could you please upgrade to 0.8 and report whether you still notice this issue.
@sakshigoel12 I've just checked with version 0.8, and for me everything is still the same, no changes.
The following information would be useful to help characterize the nature of this error:
- k8s version, including any provider-specific setup instructions (e.g. minikube startup arguments, kops config).
- New Istio install vs. upgrade?
- api-server metrics for the webhook:
kubectl proxy &
curl -s localhost:8001/metrics | grep sidecar-injector
curl -s localhost:8001/logs/kube-apiserver.log | grep sidecar-injector
- sidecar injector pod logs:
pod=$(kubectl -n istio-system get pod -listio=sidecar-injector -o jsonpath='{.items[0].metadata.name}')
kubectl -n istio-system logs ${pod}
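For the k8s version itself, a quick one-liner:
kubectl version --short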
Same issue here. GKE 1.10.2 Regional Cluster. Istio 0.7.1 w/ Istio Auth.
Pod Log:
2018-06-06T15:41:59.665332Z info version root@c5207293dc14-docker.io/istio-0.7.1-62110d4f0373a7613e57b8a4d559ded9cb6a1cc8-Clean
2018-06-06T15:41:59.667144Z info New configuration: sha256sum 89f303e89130ed85bd1ec065bde968ac524e134616b9f58552eea01b00505e5d
2018-06-06T15:41:59.667179Z info Policy: enabled
2018-06-06T15:41:59.667195Z info Template: |
initContainers:
- name: istio-init
  image: docker.io/istio/proxy_init:0.7.1
  args:
  - "-p"
  - {{ .MeshConfig.ProxyListenPort }}
  - "-u"
  - 1337
  - -i
  - 10.0.0.0/8
  imagePullPolicy: IfNotPresent
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
  restartPolicy: Always
containers:
- name: istio-proxy
  image: docker.io/istio/proxy:0.7.1
  args:
  - proxy
  - sidecar
  - --configPath
  - {{ .ProxyConfig.ConfigPath }}
  - --binaryPath
  - {{ .ProxyConfig.BinaryPath }}
  - --serviceCluster
  {{ if ne "" (index .ObjectMeta.Labels "app") -}}
  - {{ index .ObjectMeta.Labels "app" }}
  {{ else -}}
  - "istio-proxy"
  {{ end -}}
  - --drainDuration
  - {{ formatDuration .ProxyConfig.DrainDuration }}
  - --parentShutdownDuration
  - {{ formatDuration .ProxyConfig.ParentShutdownDuration }}
  - --discoveryAddress
  - {{ .ProxyConfig.DiscoveryAddress }}
  - --discoveryRefreshDelay
  - {{ formatDuration .ProxyConfig.DiscoveryRefreshDelay }}
  - --zipkinAddress
  - {{ .ProxyConfig.ZipkinAddress }}
  - --connectTimeout
  - {{ formatDuration .ProxyConfig.ConnectTimeout }}
  - --statsdUdpAddress
  - {{ .ProxyConfig.StatsdUdpAddress }}
  - --proxyAdminPort
  - {{ .ProxyConfig.ProxyAdminPort }}
  - --controlPlaneAuthPolicy
  - {{ .ProxyConfig.ControlPlaneAuthPolicy }}
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: INSTANCE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: false
    readOnlyRootFilesystem: true
    runAsUser: 1337
  restartPolicy: Always
  volumeMounts:
  - mountPath: /etc/istio/proxy
    name: istio-envoy
  - mountPath: /etc/certs/
    name: istio-certs
    readOnly: true
volumes:
- emptyDir:
    medium: Memory
  name: istio-envoy
- name: istio-certs
  secret:
    optional: true
    {{ if eq .Spec.ServiceAccountName "" -}}
    secretName: istio.default
    {{ else -}}
    secretName: {{ printf "istio.%s" .Spec.ServiceAccountName }}
    {{ end -}}
Same issue here: Kubernetes v1.9.6, Istio 0.8.0, a new Istio install.
# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota \
--advertise-address=192.168.5.7 \
--bind-address=192.168.5.7 \
--insecure-bind-address=127.0.0.1 \
--insecure-port=8080 \
--secure-port=443 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=1-60000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.5.7:2379,https://192.168.5.8:2379,https://192.168.5.86:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=1 \
--endpoint-reconciler-type=lease \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h \
--runtime-config=batch/v2alpha1=true \
--runtime-config=admissionregistration.k8s.io/v1alpha1=true \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
mac-temp:istio-0.8.0 temp$ kubectl proxy & curl -s localhost:8001/metrics | grep sidecar-injector
Starting to serve on 127.0.0.1:8001
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="25000"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="62500"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="156250"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="390625"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="976562.5"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="+Inf"} 4
apiserver_admission_webhook_admission_latencies_seconds_sum{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1"} 1.20006957e+08
apiserver_admission_webhook_admission_latencies_seconds_count{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1"} 4
apiserver_request_count{client="sidecar-injector/v0.0.0 (linux/amd64) kubernetes/$Format",code="200",contentType="application/json",resource="mutatingwebhookconfigurations",scope="cluster",subresource="",verb="GET"} 1
apiserver_request_count{client="sidecar-injector/v0.0.0 (linux/amd64) kubernetes/$Format",code="200",contentType="application/json",resource="mutatingwebhookconfigurations",scope="cluster",subresource="",verb="PATCH"} 1
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.001"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.002"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.004"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.008"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.016"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.032"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.064"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.128"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.256"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="0.512"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST",le="+Inf"} 4
rest_client_request_latency_seconds_sum{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST"} 120.00553613299999
rest_client_request_latency_seconds_count{url="https://istio-sidecar-injector.istio-system.svc:443/inject",verb="POST"} 4
rest_client_requests_total{code="<error>",host="istio-sidecar-injector.istio-system.svc:443",method="POST"} 4
mac-temp:istio-0.8.0 temp$ curl -s localhost:8001/logs/kube-apiserver.log | grep sidecar-injector
mac-temp:istio-0.8.0 temp$ pod=$(kubectl -n istio-system get pod -listio=sidecar-injector -o jsonpath='{.items[0].metadata.name}')
mac-temp:istio-0.8.0 temp$
mac-temp:istio-0.8.0 temp$ kubectl -n istio-system logs ${pod}
2018-06-13T11:13:54.600881Z info version root@48d5ddfd72da-docker.io/istio-0.8.0-6f9f420f0c7119ff4fa6a1966a6f6d89b1b4db84-Clean
2018-06-13T11:13:54.603269Z info New configuration: sha256sum 6fb54ce8cab658754b93e2064ea052a1cd0682a6a385ea03d4ebe66707327e2a
2018-06-13T11:13:54.603269Z info Policy: enabled
2018-06-13T11:13:54.604163Z info Template: |
initContainers:
- name: istio-init
  image: docker.io/istio/proxy_init:0.8.0
  args:
  - "-p"
  - [[ .MeshConfig.ProxyListenPort ]]
  - "-u"
  - 1337
  - "-m"
  - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
  - "-i"
  [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges") -]]
  - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges" ]]"
  [[ else -]]
  - "*"
  [[ end -]]
  - "-x"
  [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges") -]]
  - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges" ]]"
  [[ else -]]
  - ""
  [[ end -]]
  - "-b"
  [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts") -]]
  - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts" ]]"
  [[ else -]]
  - [[ range .Spec.Containers -]][[ range .Ports -]][[ .ContainerPort -]], [[ end -]][[ end -]][[ end]]
  - "-d"
  [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts") -]]
  - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts" ]]"
  [[ else -]]
  - ""
  [[ end -]]
  imagePullPolicy: IfNotPresent
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
    privileged: true
  restartPolicy: Always
containers:
- name: istio-proxy
  image: [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyImage") -]]
  "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyImage" ]]"
  [[ else -]]
  docker.io/istio/proxyv2:0.8.0
  [[ end -]]
  args:
  - proxy
  - sidecar
  - --configPath
  - [[ .ProxyConfig.ConfigPath ]]
  - --binaryPath
  - [[ .ProxyConfig.BinaryPath ]]
  - --serviceCluster
  [[ if ne "" (index .ObjectMeta.Labels "app") -]]
  - [[ index .ObjectMeta.Labels "app" ]]
  [[ else -]]
  - "istio-proxy"
  [[ end -]]
  - --drainDuration
  - [[ formatDuration .ProxyConfig.DrainDuration ]]
  - --parentShutdownDuration
  - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
  - --discoveryAddress
  - [[ .ProxyConfig.DiscoveryAddress ]]
  - --discoveryRefreshDelay
  - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
  - --zipkinAddress
  - [[ .ProxyConfig.ZipkinAddress ]]
  - --connectTimeout
  - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
  - --statsdUdpAddress
  - [[ .ProxyConfig.StatsdUdpAddress ]]
  - --proxyAdminPort
  - [[ .ProxyConfig.ProxyAdminPort ]]
  - --controlPlaneAuthPolicy
  - [[ .ProxyConfig.ControlPlaneAuthPolicy ]]
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: INSTANCE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: ISTIO_META_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: ISTIO_META_INTERCEPTION_MODE
    value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: false
    readOnlyRootFilesystem: true
    [[ if eq (or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String) "TPROXY" -]]
    capabilities:
      add:
      - NET_ADMIN
    [[ else -]]
    runAsUser: 1337
    [[ end -]]
  restartPolicy: Always
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  volumeMounts:
  - mountPath: /etc/istio/proxy
    name: istio-envoy
  - mountPath: /etc/certs/
    name: istio-certs
    readOnly: true
volumes:
- emptyDir:
    medium: Memory
  name: istio-envoy
- name: istio-certs
  secret:
    optional: true
    [[ if eq .Spec.ServiceAccountName "" -]]
    secretName: istio.default
    [[ else -]]
    secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
    [[ end -]]
2018-06-13T11:13:54.605739Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I also have this issue with a new EKS Kubernetes 1.10.3 istio 0.8.0 installation trying to run bookinfo example.
Pod Log:
2018-06-19T17:24:21.338177Z info version root@48d5ddfd72da-docker.io/istio-0.8.0-6f9f420f0c7119ff4fa6a1966a6f6d89b1b4db84-Clean
2018-06-19T17:24:21.339446Z info New configuration: sha256sum 6fb54ce8cab658754b93e2064ea052a1cd0682a6a385ea03d4ebe66707327e2a
2018-06-19T17:24:21.339466Z info Policy: enabled
2018-06-19T17:24:21.339491Z info Template: |
[template elided - it is identical to the 0.8.0 template in the previous comment]
2018-06-19T17:24:21.340174Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I had the same issue when trying EKS.
So: K8s 1.10.3, and my setup uses mutual TLS authentication (kubectl apply -f install/kubernetes/istio-demo-auth.yaml). The error, like the others', is:
Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Some debugging I did:
- Checked the istio-injection=enabled label; everything is correct.
- Set istio-injection=enabled on a new namespace and deployed ===> ERROR.
- Ran curl https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s from a pod without injection; the result is OK (it connects and returns something).
Now my first thought: is it possible that Istio detects that a pod from a namespace should have a proxy, so it enables some kind of egress rule / mutual TLS on it, but since the proxy is missing => error?
Possibly related to https://github.com/istio/istio/issues/6069. You can confirm by checking the caBundle in the istio-sidecar-injector's mutatingwebhookconfiguration (kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml). It should be non-empty.
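A convenience one-liner that prints just the bundle (the jsonpath index assumes a single webhook entry):
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath='{.webhooks[0].clientConfig.caBundle}'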
@4220182 This is likely not your problem; however, your kube-apiserver configuration is incorrect. Your runtime-config options are wrong (e.g. admissionregistration.k8s.io/v1alpha1 is the old initializers API; the webhook API in k8s 1.9 is admissionregistration.k8s.io/v1beta1, which is enabled by default). How did you deploy your Kubernetes system?
It seems like a whole slew of people are experiencing this problem on EKS. Has anyone experienced this problem with Istio 0.8.0 on a different platform?
Cheers -steve
@ayj I can confirm some of @GregoireW's report.
Reproducer:
The bookinfo sample pods do not start and are not visible via kubectl get pods --all-namespaces. I waited 5-10 minutes for the pods to appear.
Unfortunately, the EKS control plane does not appear to be completely visible, e.g.:
sdake@falkor-07:~/istio-0.8.0/samples/bookinfo/kube$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system grafana-6f6dff9986-qgvwc 1/1 Running 0 11m
istio-system istio-citadel-7bdc7775c7-ndjqx 1/1 Running 0 11m
istio-system istio-cleanup-old-ca-qwkdw 0/1 Completed 0 11m
istio-system istio-egressgateway-78dd788b6d-hrk4g 1/1 Running 0 11m
istio-system istio-ingressgateway-7dd84b68d6-6qq2n 1/1 Running 0 11m
istio-system istio-mixer-post-install-k6zft 0/1 Completed 0 11m
istio-system istio-pilot-d5bbc5c59-g5vqb 2/2 Running 0 11m
istio-system istio-policy-64595c6fff-mqj59 2/2 Running 0 11m
istio-system istio-sidecar-injector-645c89bc64-nm42l 1/1 Running 0 11m
istio-system istio-statsd-prom-bridge-949999c4c-xtnqp 1/1 Running 0 11m
istio-system istio-telemetry-cfb674b6c-n6679 2/2 Running 0 11m
istio-system istio-tracing-754cdfd695-nprll 1/1 Running 0 11m
istio-system prometheus-86cb6dd77c-lwzcf 1/1 Running 0 11m
istio-system servicegraph-5849b7d696-6ddx8 1/1 Running 0 11m
kube-system aws-node-7wq8j 1/1 Running 1 15m
kube-system aws-node-h44q8 1/1 Running 1 15m
kube-system aws-node-jl52c 1/1 Running 0 15m
kube-system aws-node-tzhrh 1/1 Running 1 15m
kube-system kube-dns-7cc87d595-5h5v4 3/3 Running 0 34m
kube-system kube-proxy-56nt9 1/1 Running 0 15m
kube-system kube-proxy-65p74 1/1 Running 0 15m
kube-system kube-proxy-mzbqr 1/1 Running 0 15m
kube-system kube-proxy-qr6md 1/1 Running 0 15m
I'm going to see if I can get some introspection on the control plane next; however, this bug may mostly affect EKS and may be different from the bugs others have reported.
It took me a while to deploy EKS - it's not super intuitive and the first 4 or 5 deploys failed completely, so it's possible I have something wrong with the environment, although I can confirm that the rest of Istio at least starts up.
Also, this was tested (https://github.com/istio/old_issues_repo/issues/271#issuecomment-399618161) and returned a blob of auth and other metadata (so I think the bundle is still available).
Regards -steve
Tried GKE 1.10.4-gke.2 and Istio 0.8.0 (deployed Istio manually, not via helm); all works fine with the changes from https://github.com/istio/istio/issues/6388
Got a client timeout error with EKS 1.10 and Istio 0.8.0 (both manually and via helm). Looks like an EKS issue.
@alexmatsak I don't see any changes in https://github.com/istio/istio/issues/6388. Agreed, EKS looks to have a platform issue. I am not quite sure how to get control-plane logs out of EKS. I think it is possible, but I have not been successful thus far.
Logs from EKS are the next needed thing, possibly using this documentation: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html
Unanswered forum question on capturing EKS logs: https://forums.aws.amazon.com/thread.jspa?messageID=854924
More debug information. The deployments are created; however, they are not creating pods:
sdake@falkor-07:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
details-v1 1 0 0 0 21h
productpage-v1 1 0 0 0 21h
ratings-v1 1 0 0 0 21h
reviews-v1 1 0 0 0 21h
reviews-v2 1 0 0 0 21h
reviews-v3 1 0 0 0 21h
sdake@falkor-07:~$ kubectl describe deployment productpage-v1
Name: productpage-v1
Namespace: default
CreationTimestamp: Sun, 24 Jun 2018 10:45:52 -0700
Labels: app=productpage
version=v1
Annotations: deployment.kubernetes.io/revision=1
Selector: app=productpage,version=v1
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=productpage
version=v1
Containers:
productpage:
Image: istio/examples-bookinfo-productpage-v1:1.5.0
Port: 9080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
ReplicaFailure True FailedCreate
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: productpage-v1-7bbdd59459 (0/1 replicas created)
Events: <none>
sdake@falkor-07:~$ kubectl get pods
No resources found.
More debug information:
sdake@falkor-07:~$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
details-v1-7b97668445 1 0 0 21h
productpage-v1-7bbdd59459 1 0 0 21h
ratings-v1-76dc7f6b9 1 0 0 21h
reviews-v1-64545d97b4 1 0 0 21h
reviews-v2-8cb9489c6 1 0 0 21h
reviews-v3-6bc884b456 1 0 0 21h
sdake@falkor-07:~$ kubectl describe replicaset productpage-v1-7bbdd59459
Name: productpage-v1-7bbdd59459
Namespace: default
Selector: app=productpage,pod-template-hash=3668815015,version=v1
Labels: app=productpage
pod-template-hash=3668815015
version=v1
Annotations: deployment.kubernetes.io/desired-replicas=1
deployment.kubernetes.io/max-replicas=2
deployment.kubernetes.io/revision=1
Controlled By: Deployment/productpage-v1
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=productpage
pod-template-hash=3668815015
version=v1
Containers:
productpage:
Image: istio/examples-bookinfo-productpage-v1:1.5.0
Port: 9080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
ReplicaFailure True FailedCreate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 15m (x94 over 21h) replicaset-controller Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Maybe unrelated, but we saw this exact same symptom when we tried to use IPVS on our clusters. It manifested as the Kubernetes hosts being unable to communicate with the injector correctly.
In case it is related: https://github.com/kubernetes/kubernetes/pull/65388
Steps to check if IPVS is enabled (see the sketch after this list):
- look at ipvsadm -L, or
- iptables -L and see if the kube rules are there, or
- look at the kube-proxy config file and see what the proxy mode is
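For example (a sketch; the k8s-app=kube-proxy label is an assumption, and on EKS kube-proxy logs to a file, so kubectl logs may be empty):
# which proxier did kube-proxy actually select?
pod=$(kubectl -n kube-system get pod -l k8s-app=kube-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system logs ${pod} | grep -i proxier   # e.g. "Using iptables Proxier."
# on a node:
sudo ipvsadm -L -n                    # lists IPVS virtual services, if any
sudo iptables -t nat -L | grep KUBE   # iptables-mode KUBE- chains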
The proxy option seems to indicate iptables; however, exec'ing into the container yields an ipvs module loading error (below).
sdake@falkor-07:~$ kubectl describe po kube-proxy-56nt9 -n kube-system
Name: kube-proxy-56nt9
Namespace: kube-system
Node: ip-192-168-106-175.us-west-2.compute.internal/192.168.106.175
Start Time: Sun, 24 Jun 2018 10:23:08 -0700
Labels: controller-revision-hash=1941829153
k8s-app=kube-proxy
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 192.168.106.175
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID: docker://6e2e6f0528b7f407c3bfdab5ca6ea67909645a4ffb136f64220b70b94fe35932
Image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.10.3
Image ID: docker-pullable://602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy@sha256:76927fb03bd6b37be4330c356e95bcac16ee6961a12da7b7e6ffa50db376438c
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
kube-proxy --resource-container="" --oom-score-adj=-998 --master=https://bd99100e8c3a85d50b6f2418de632cd7.yl4.us-west-2.eks.amazonaws.com --kubeconfig=/var/lib/kube-proxy/kubeconfig --proxy-mode=iptables --v=2 1>>/var/log/kube-proxy.log 2>&1
Exec'ing into kube-proxy:
Flag --resource-container has been deprecated, This feature will be removed in a later release.
I0624 17:23:12.356566 5 flags.go:27] FLAG: --alsologtostderr="false"
I0624 17:23:12.356604 5 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0624 17:23:12.356611 5 flags.go:27] FLAG: --cleanup="false"
I0624 17:23:12.356617 5 flags.go:27] FLAG: --cleanup-iptables="false"
I0624 17:23:12.356622 5 flags.go:27] FLAG: --cleanup-ipvs="true"
I0624 17:23:12.356626 5 flags.go:27] FLAG: --cluster-cidr=""
I0624 17:23:12.356632 5 flags.go:27] FLAG: --config=""
I0624 17:23:12.356636 5 flags.go:27] FLAG: --config-sync-period="15m0s"
I0624 17:23:12.356642 5 flags.go:27] FLAG: --conntrack-max="0"
I0624 17:23:12.356650 5 flags.go:27] FLAG: --conntrack-max-per-core="32768"
I0624 17:23:12.356654 5 flags.go:27] FLAG: --conntrack-min="131072"
I0624 17:23:12.356659 5 flags.go:27] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0624 17:23:12.356664 5 flags.go:27] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0624 17:23:12.356668 5 flags.go:27] FLAG: --feature-gates=""
I0624 17:23:12.356675 5 flags.go:27] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0624 17:23:12.356688 5 flags.go:27] FLAG: --healthz-port="10256"
I0624 17:23:12.356693 5 flags.go:27] FLAG: --help="false"
I0624 17:23:12.356697 5 flags.go:27] FLAG: --hostname-override=""
I0624 17:23:12.356701 5 flags.go:27] FLAG: --iptables-masquerade-bit="14"
I0624 17:23:12.356706 5 flags.go:27] FLAG: --iptables-min-sync-period="0s"
I0624 17:23:12.356710 5 flags.go:27] FLAG: --iptables-sync-period="30s"
I0624 17:23:12.356714 5 flags.go:27] FLAG: --ipvs-min-sync-period="0s"
I0624 17:23:12.356743 5 flags.go:27] FLAG: --ipvs-scheduler=""
I0624 17:23:12.356751 5 flags.go:27] FLAG: --ipvs-sync-period="30s"
I0624 17:23:12.356755 5 flags.go:27] FLAG: --kube-api-burst="10"
I0624 17:23:12.356759 5 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0624 17:23:12.356764 5 flags.go:27] FLAG: --kube-api-qps="5"
I0624 17:23:12.356771 5 flags.go:27] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I0624 17:23:12.356776 5 flags.go:27] FLAG: --log-backtrace-at=":0"
I0624 17:23:12.356783 5 flags.go:27] FLAG: --log-dir=""
I0624 17:23:12.356788 5 flags.go:27] FLAG: --log-flush-frequency="5s"
I0624 17:23:12.356792 5 flags.go:27] FLAG: --logtostderr="true"
I0624 17:23:12.356797 5 flags.go:27] FLAG: --masquerade-all="false"
I0624 17:23:12.356801 5 flags.go:27] FLAG: --master="https://bd99100e8c3a85d50b6f2418de632cd7.yl4.us-west-2.eks.amazonaws.com"
I0624 17:23:12.356808 5 flags.go:27] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0624 17:23:12.356812 5 flags.go:27] FLAG: --nodeport-addresses="[]"
I0624 17:23:12.356823 5 flags.go:27] FLAG: --oom-score-adj="-998"
I0624 17:23:12.356827 5 flags.go:27] FLAG: --profiling="false"
I0624 17:23:12.356832 5 flags.go:27] FLAG: --proxy-mode="iptables"
I0624 17:23:12.356838 5 flags.go:27] FLAG: --proxy-port-range=""
I0624 17:23:12.356843 5 flags.go:27] FLAG: --resource-container=""
I0624 17:23:12.356847 5 flags.go:27] FLAG: --stderrthreshold="2"
I0624 17:23:12.356852 5 flags.go:27] FLAG: --udp-timeout="250ms"
I0624 17:23:12.356856 5 flags.go:27] FLAG: --v="2"
I0624 17:23:12.356861 5 flags.go:27] FLAG: --version="false"
I0624 17:23:12.356867 5 flags.go:27] FLAG: --vmodule=""
I0624 17:23:12.356872 5 flags.go:27] FLAG: --write-config-to=""
W0624 17:23:12.356879 5 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0624 17:23:12.356913 5 feature_gate.go:226] feature gates: &{{} map[]}
I0624 17:23:12.359034 5 iptables.go:198] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
time="2018-06-24T17:23:12Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.14.42-61.37.amzn2.x86_64/modules.dep.bin'\nmodprobe: WARNING: Module ip_vs not found in directory /lib/modules/4.14.42-61.37.amzn2.x86_64`, error: exit status 1"
time="2018-06-24T17:23:12Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
I0624 17:23:12.369734 5 server_others.go:140] Using iptables Proxier.
W0624 17:23:12.405993 5 proxier.go:311] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0624 17:23:12.406074 5 server_others.go:174] Tearing down inactive rules.
I0624 17:23:12.597653 5 server.go:444] Version: v1.10.3
I0624 17:23:12.598263 5 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0624 17:23:12.598307 5 conntrack.go:52] Setting nf_conntrack_max to 131072
I0624 17:23:12.598413 5 mount_linux.go:196] Detected OS without systemd
I0624 17:23:12.598597 5 conntrack.go:83] Setting conntrack hashsize to 32768
I0624 17:23:12.617574 5 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0624 17:23:12.617720 5 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0624 17:23:12.618008 5 config.go:102] Starting endpoints config controller
I0624 17:23:12.618021 5 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0624 17:23:12.618053 5 config.go:202] Starting service config controller
I0624 17:23:12.618059 5 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0624 17:23:12.718214 5 controller_utils.go:1026] Caches are synced for service config controller
I0624 17:23:12.718275 5 proxier.go:623] Not syncing iptables until Services and Endpoints have been received from master
I0624 17:23:12.718291 5 controller_utils.go:1026] Caches are synced for endpoints config controller
I0624 17:23:12.718337 5 service.go:310] Adding new service port "kube-system/kube-dns:dns" at 10.100.0.10:53/UDP
I0624 17:23:12.718359 5 service.go:310] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.100.0.10:53/TCP
I0624 17:23:12.718371 5 service.go:310] Adding new service port "default/kubernetes:https" at 10.100.0.1:443/TCP
I0624 17:24:09.514157 5 proxier.go:637] Stale udp service kube-system/kube-dns:dns -> 10.100.0.10
I0624 17:27:00.382157 5 service.go:310] Adding new service port "istio-system/istio-egressgateway:http" at 10.100.81.81:80/TCP
I0624 17:27:00.382197 5 service.go:310] Adding new service port "istio-system/istio-egressgateway:https" at 10.100.81.81:443/TCP
I0624 17:27:00.415484 5 service.go:310] Adding new service port "istio-system/grafana:http" at 10.100.14.255:3000/TCP
I0624 17:27:00.469614 5 service.go:310] Adding new service port "istio-system/istio-ingressgateway:http" at 10.100.155.159:80/TCP
I0624 17:27:00.469641 5 service.go:310] Adding new service port "istio-system/istio-ingressgateway:https" at 10.100.155.159:443/TCP
I0624 17:27:00.469652 5 service.go:310] Adding new service port "istio-system/istio-ingressgateway:tcp" at 10.100.155.159:31400/TCP
I0624 17:27:00.482924 5 proxier.go:1372] Opened local port "nodePort for istio-system/istio-ingressgateway:https" (:31390/tcp)
I0624 17:27:00.482986 5 proxier.go:1372] Opened local port "nodePort for istio-system/istio-ingressgateway:http" (:31380/tcp)
I0624 17:27:00.483012 5 proxier.go:1372] Opened local port "nodePort for istio-system/istio-ingressgateway:tcp" (:31400/tcp)
I0624 17:27:00.504535 5 service.go:310] Adding new service port "istio-system/istio-policy:grpc-mixer-mtls" at 10.100.175.154:15004/TCP
I0624 17:27:00.504558 5 service.go:310] Adding new service port "istio-system/istio-policy:http-monitoring" at 10.100.175.154:9093/TCP
I0624 17:27:00.504577 5 service.go:310] Adding new service port "istio-system/istio-policy:grpc-mixer" at 10.100.175.154:9091/TCP
I0624 17:27:00.539703 5 service.go:310] Adding new service port "istio-system/istio-telemetry:grpc-mixer" at 10.100.71.96:9091/TCP
I0624 17:27:00.539726 5 service.go:310] Adding new service port "istio-system/istio-telemetry:grpc-mixer-mtls" at 10.100.71.96:15004/TCP
I0624 17:27:00.539744 5 service.go:310] Adding new service port "istio-system/istio-telemetry:http-monitoring" at 10.100.71.96:9093/TCP
I0624 17:27:00.539754 5 service.go:310] Adding new service port "istio-system/istio-telemetry:prometheus" at 10.100.71.96:42422/TCP
I0624 17:27:00.575381 5 service.go:310] Adding new service port "istio-system/istio-statsd-prom-bridge:statsd-prom" at 10.100.35.48:9102/TCP
I0624 17:27:00.575402 5 service.go:310] Adding new service port "istio-system/istio-statsd-prom-bridge:statsd-udp" at 10.100.35.48:9125/UDP
I0624 17:27:00.653021 5 service.go:310] Adding new service port "istio-system/istio-pilot:http-old-discovery" at 10.100.58.216:15003/TCP
I0624 17:27:00.653054 5 service.go:310] Adding new service port "istio-system/istio-pilot:https-discovery" at 10.100.58.216:15005/TCP
I0624 17:27:00.653064 5 service.go:310] Adding new service port "istio-system/istio-pilot:http-discovery" at 10.100.58.216:15007/TCP
I0624 17:27:00.653073 5 service.go:310] Adding new service port "istio-system/istio-pilot:grpc-xds" at 10.100.58.216:15010/TCP
I0624 17:27:00.653082 5 service.go:310] Adding new service port "istio-system/istio-pilot:https-xds" at 10.100.58.216:15011/TCP
I0624 17:27:00.653091 5 service.go:310] Adding new service port "istio-system/istio-pilot:http-legacy-discovery" at 10.100.58.216:8080/TCP
I0624 17:27:00.653100 5 service.go:310] Adding new service port "istio-system/istio-pilot:http-monitoring" at 10.100.58.216:9093/TCP
I0624 17:27:00.686356 5 service.go:310] Adding new service port "istio-system/prometheus:http-prometheus" at 10.100.80.253:9090/TCP
I0624 17:27:00.721812 5 service.go:310] Adding new service port "istio-system/istio-citadel:grpc-citadel" at 10.100.59.155:8060/TCP
I0624 17:27:00.721841 5 service.go:310] Adding new service port "istio-system/istio-citadel:http-monitoring" at 10.100.59.155:9093/TCP
I0624 17:27:00.755543 5 service.go:310] Adding new service port "istio-system/servicegraph:http" at 10.100.12.95:8088/TCP
I0624 17:27:00.790643 5 service.go:310] Adding new service port "istio-system/istio-sidecar-injector:" at 10.100.110.209:443/TCP
I0624 17:27:01.289411 5 service.go:310] Adding new service port "istio-system/zipkin:http" at 10.100.90.185:9411/TCP
I0624 17:27:01.331636 5 service.go:310] Adding new service port "istio-system/tracing:query-http" at 10.100.195.167:80/TCP
I0624 17:27:01.346768 5 proxier.go:1372] Opened local port "nodePort for istio-system/tracing:query-http" (:30709/tcp)
I0624 17:27:03.122656 5 service.go:312] Updating existing service port "istio-system/istio-ingressgateway:http" at 10.100.155.159:80/TCP
I0624 17:27:03.122695 5 service.go:312] Updating existing service port "istio-system/istio-ingressgateway:https" at 10.100.155.159:443/TCP
I0624 17:27:03.122707 5 service.go:312] Updating existing service port "istio-system/istio-ingressgateway:tcp" at 10.100.155.159:31400/TCP
I0624 17:27:05.178208 5 proxier.go:637] Stale udp service istio-system/istio-statsd-prom-bridge:statsd-udp -> 10.100.35.48
I0624 17:27:05.825662 5 service.go:312] Updating existing service port "istio-system/tracing:query-http" at 10.100.195.167:80/TCP
I0624 17:29:30.939232 5 service.go:310] Adding new service port "default/details:http" at 10.100.27.110:9080/TCP
I0624 17:29:31.079034 5 service.go:310] Adding new service port "default/ratings:http" at 10.100.87.113:9080/TCP
I0624 17:29:31.140160 5 service.go:310] Adding new service port "default/reviews:http" at 10.100.237.150:9080/TCP
I0624 17:29:31.255602 5 service.go:310] Adding new service port "default/productpage:http" at 10.100.37.218:9080/TCP
I0624 17:35:50.680421 5 service.go:335] Removing service port "default/details:http"
I0624 17:35:52.201910 5 service.go:335] Removing service port "default/ratings:http"
I0624 17:35:53.573137 5 service.go:335] Removing service port "default/reviews:http"
I0624 17:35:57.582514 5 service.go:335] Removing service port "default/productpage:http"
I0624 17:38:44.283057 5 service.go:310] Adding new service port "default/details:http" at 10.100.183.47:9080/TCP
I0624 17:38:44.343576 5 service.go:310] Adding new service port "default/ratings:http" at 10.100.23.152:9080/TCP
I0624 17:38:44.406267 5 service.go:310] Adding new service port "default/reviews:http" at 10.100.80.139:9080/TCP
I0624 17:38:44.521847 5 service.go:310] Adding new service port "default/productpage:http" at 10.100.217.177:9080/TCP
I0624 17:42:37.232958 5 service.go:335] Removing service port "default/details:http"
I0624 17:42:38.702769 5 service.go:335] Removing service port "default/ratings:http"
I0624 17:42:40.068814 5 service.go:335] Removing service port "default/reviews:http"
I0624 17:42:44.068680 5 service.go:335] Removing service port "default/productpage:http"
I0624 17:45:51.765009 5 service.go:310] Adding new service port "default/details:http" at 10.100.132.208:9080/TCP
I0624 17:45:51.829059 5 service.go:310] Adding new service port "default/ratings:http" at 10.100.67.175:9080/TCP
I0624 17:45:51.893858 5 service.go:310] Adding new service port "default/reviews:http" at 10.100.135.39:9080/TCP
I0624 17:45:52.018230 5 service.go:310] Adding new service port "default/productpage:http" at 10.100.173.193:9080/TCP
@ayj metrics output:
sdake@falkor-07:~$ curl -s localhost:8001/metrics | grep sidecar
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="25000"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="62500"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="156250"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="390625"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="976562.5"} 0
apiserver_admission_webhook_admission_latencies_seconds_bucket{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1",le="+Inf"} 2
apiserver_admission_webhook_admission_latencies_seconds_sum{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1"} 6.0001078e+07
apiserver_admission_webhook_admission_latencies_seconds_count{group="",name="sidecar-injector.istio.io",operation="CREATE",rejected="true",resource="pods",subresource="",type="admit",version="v1"} 2
apiserver_request_count{client="sidecar-injector/v0.0.0 (linux/amd64) kubernetes/$Format",code="200",contentType="application/json",resource="mutatingwebhookconfigurations",scope="cluster",subresource="",verb="GET"} 1
apiserver_request_count{client="sidecar-injector/v0.0.0 (linux/amd64) kubernetes/$Format",code="200",contentType="application/json",resource="mutatingwebhookconfigurations",scope="cluster",subresource="",verb="PATCH"} 1
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.001"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.002"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.004"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.008"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.016"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.032"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.064"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.128"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.256"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="0.512"} 0
rest_client_request_latency_seconds_bucket{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST",le="+Inf"} 2
rest_client_request_latency_seconds_sum{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST"} 60.000367064
rest_client_request_latency_seconds_count{url="https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s",verb="POST"} 2
rest_client_requests_total{code="<error>",host="istio-sidecar-injector.istio-system.svc:443",method="POST"} 2
The ipvs kernel error seems to be innocuous (see https://github.com/kubernetes/kubernetes/issues/61074).
@ayj there is no logs path:
sdake@falkor-07:~$ curl localhost:8001/logs/kube-apiserver.log
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
sdake@falkor-07:~$
Not sure if this is also important from:
https://github.com/istio/old_issues_repo/issues/271#issuecomment-400006894
W0624 17:23:12.405993 5 proxier.go:311] clusterCIDR not specified, unable to distinguish between internal and external traffic
There are a couple of PRs in the pipeline that should fix observed webhook issues (failed calling admission webhook): https://github.com/istio/istio/pull/6435 https://github.com/istio/istio/pull/6610 Different root causes but same symptom at the highest level.
I deployed my Kubernetes system following https://github.com/kelseyhightower/kubernetes-the-hard-way @sdake
@4220182 OK - well, would you consider deploying with kubeadm or another tool? The hard way is about learning how to deploy Kubernetes, while kubeadm repeats the hard way consistently.
Cheers -steve
@ayj
Possibly related to istio/istio#6069. You can confirm by checking the caBundle in the istio-sidecar-injector's mutatingwebhookconfiguration (kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml). It should be non-empty.
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml returns something, and there is a caBundle:
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJ...
    service:
      name: istio-sidecar-injector
      namespace: istio-system
      path: /inject
The service exists:
$ k get svc -n istio-system istio-sidecar-injector -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-sidecar-injector ClusterIP 10.100.197.12 <none> 443/TCP 3h istio=sidecar-injector
There are pods:
$ k get po -n istio-system -l istio=sidecar-injector
NAME READY STATUS RESTARTS AGE
istio-sidecar-injector-645c89bc64-tsl7b 1/1 Running 0 2h
But the pod logs have this at the end:
2018-06-26T16:25:53.794519Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2018-06-26T16:25:53.810402Z error Register webhook failed: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" not found. Retrying...
I deleted the pod to force a refresh. The error log line is not there anymore, but the issue remains.
I checked the kubelet logs on the host; there is not much, and nothing related to an error.
Tomorrow I will try to set up sysdig to get some insight into system events (on the worker host), but I'm not sure which process triggers the webhook, and if it is a master process we are screwed...
This is not a defect in Istio, rather a notice of how much bleeding-edge Kubernetes tech Istio is using :) This is a problem where EKS does not implement either validating or mutating webhooks.
Cheers -steve
@sdake
This is a problem where EKS does not implement either validating or mutating webhooks
The hook is triggered, or else the pod would be created (disabling the istio-sidecar-injector hook makes pods appear, obviously without the Istio proxy).
Hmm... apparently, there is no call to kube-dns to get the injector IP; it seems there is no network call to the injector service at all.
I'm really not a mutating-hook expert. Can someone confirm whether it is the master that runs them (or not)?
In that case, the master needs to interact with the virtual cluster network, which means routing between the master and the nodes. Could that be it?
The MutatingWebhook admission controller plugin is compiled into the kube-apiserver. The admission controller invokes the sidecar injector webhook before the pod is created. There must be routing between the apiserver (master) and the nodes running the injector pod.
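For illustration, the registration the apiserver acts on looks roughly like this (a sketch assembled from the clientConfig shown earlier in this thread; the rules and namespaceSelector are assumptions based on how Istio 0.8 wires up injection):
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
- name: sidecar-injector.istio.io
  clientConfig:
    caBundle: <base64-encoded CA cert>   # used to verify the injector's serving cert
    service:
      name: istio-sidecar-injector      # the apiserver resolves this service's ClusterIP
      namespace: istio-system           # itself and POSTs an AdmissionReview to it
      path: /inject                     # before the pod object is persisted
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchLabels:
      istio-injection: enabled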
@ayj so is this an issue on the Istio side or the AWS EKS side? The marriage of the Istio v1 release coming just after the GA of EKS was looking like the perfect storm for both platforms.
Istio can be used on EKS with reduced capabilities: turn off galley, and manual injection is required. With galley enabled and automatic injection enabled, EKS does not function properly with Istio.
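For reference, manual injection renders the sidecar into the manifest client-side instead of relying on the webhook, e.g. (the bookinfo path below matches the samples directory used earlier in this thread):
istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml | kubectl apply -f -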
That is how we get around the current issue, but is this an issue on the Istio side or the EKS side?
@garysu EKS does not implement nor support mutating or validating webhooks. Blaming one side or another isn't helpful as it depends on perspective :)
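One client-side probe of what a managed cluster serves (a sketch; an empty result means the admissionregistration API group, and hence webhook registration, is not exposed, though whether the MutatingAdmissionWebhook admission plugin is enabled on the apiserver is not visible from the client side):
kubectl api-versions | grep admissionregistration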
Cheers -steve
Wasn't blaming 😊, but now I know to go have the conversation with the EKS team and see where this is on their radar to support (or not, as the case may be).
Gary.
I've posted this to Amazon's EKS forum: https://forums.aws.amazon.com/thread.jspa?threadID=285696
I got this on my v1.9.5 cluster with the Istio 0.8 release:
Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: EOF
I guess that's because curl https://istio-sidecar-injector.istio-system.svc:443/inject does not work, but curl https://istio-sidecar-injector.istio-system:443/inject does. Am I right?
Just so we are on the same page: I am not using EKS; instead I am running a cluster with kops because my region is not supported by EKS yet. The K8s API has the configuration mentioned in the Istio documentation with respect to webhooks:
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority
The K8s version is 1.9.6 and the Istio version is 0.8.
I am of the opinion that this is not only an EKS issue.
@milosradovanovic I agree there are possibly two vectors:
@FrostyLeaf, istio-sidecar, istio-sidecar.istio-system, and istio-sidecar.istio-system.svc are all valid DNS names for the sidecar injector service depending on the source context. See https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services. If you're not running on EKS, can you open a new issue and include all of the debug information that's been requested elsewhere in this issue? e.g. mutatingwebhookconfiguration, injector pod log, api server logs, etc.
@milosradovanovic, can you open a new issue and include your kops configuration, along with the mutatingwebhookconfiguration, sidecar logs, and api-server logs? It would also be good to check your proxy settings to ensure the api-server can reach the in-cluster injector pod (see https://istio.io/help/troubleshooting/#automatic-sidecar-injection-will-fail-if-the-kube-apiserver-has-proxy-settings).
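A way to do that proxy check (a sketch, assuming the apiserver runs as a host process you can inspect; paths and unit names vary by installer):
# on the master, inspect the kube-apiserver process environment for proxy settings
pid=$(pgrep -f kube-apiserver | head -1)
sudo tr '\0' '\n' < /proc/${pid}/environ | grep -i _proxy
# if http_proxy/https_proxy is set, the injector's service IP range must be in
# no_proxy, or the webhook POST will be sent through the proxy and time out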
@ayj Maybe the <svc-name>.<namespace>.svc type of record worked in an earlier version, but I found nothing about this in the newest Kubernetes docs. I'm not working on Istio right now, but I can demonstrate this with another service:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
curl-545bbf5f9c-tnpbc 1/1 Running 1 8d
voting-clam-chartmuseum-7946f7cf68-zpsts 1/1 Running 1 8d
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
voting-clam-chartmuseum ClusterIP 10.102.148.26 <none> 8080/TCP 40d
$ kubectl exec curl-545bbf5f9c-tnpbc -- curl -k -s voting-clam-chartmuseum:8080/index.yaml
apiVersion: v1
entries: {}
generated: "2018-07-09T18:41:45Z"
$ kubectl exec curl-545bbf5f9c-tnpbc -- curl -k -s voting-clam-chartmuseum.default:8080/index.yaml
apiVersion: v1
entries: {}
generated: "2018-07-09T18:41:45Z"
$ kubectl exec curl-545bbf5f9c-tnpbc -- curl -k -s voting-clam-chartmuseum.default.svc:8080/index.yaml
command terminated with exit code 6
$ kubectl exec curl-545bbf5f9c-tnpbc -- curl -k -s voting-clam-chartmuseum.default.svc.cluster.local:8080/index.yaml
apiVersion: v1
entries: {}
generated: "2018-07-09T18:41:45Z"
@ayj You're right. Thanks a lot. I'll try to read more.
@garysu did you get an update on the support of mutating and validating webhooks from the EKS team?
Yes…they understand the need and are working to make a solution available, but there is not a committed timeline yet.
My guess is that they will release something by the end of the year as getting other regions up with EKS is a higher priority right now.
master1 kube-controller-manager: I0405 21:40:29.377013 1538 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"sleep-6bc9d848fc", UID:"322f0a8a-38d5-11e8-aad2-005056846055", APIVersion:"extensions", ResourceVersion:"2633", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)