Closed: timhughes closed this issue 2 years ago.
/kind support
I'm running into the same issue. Is this possible with istio and minikube?
@mikala3 what error do you get? Have you tried allocating more memory? Istio needs a lot of memory.
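For reference, a minimal sketch of giving minikube more headroom (the figures are illustrative assumptions, not a documented minimum for istio):

# Illustrative resource bump; adjust to what your machine can spare
minikube start --memory=16g --cpus=4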
Tried again on latest minikube
minikube version: v1.18.1
commit: 09ee84d530de4a92f00f1c5dbc34cead092b95bc
kubectl logs -n istio-system pod/prometheus-7767dfd55-9wg5k -c istio-proxy
Still getting the same errors
[Envoy (Epoch 0)] [2021-03-11 15:20:56.272][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:54] Unable to establish new stream
2021-03-11T15:20:57.812763Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-03-11T15:20:58.961102Z warn cache node:sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local-13 resource:default request:7bb64c9f-4125-45ba-bf24-0356cd717956 CSR failed with error: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout", retry in 6400 millisec
2021-03-11T15:20:58.961207Z error citadelclient Failed to create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout"
2021-03-11T15:20:58.961224Z error cache node:sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local-13 resource:default request:7bb64c9f-4125-45ba-bf24-0356cd717956 CSR retrial timed out: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout"
2021-03-11T15:20:58.961237Z error cache node:sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local-13 resource:default failed to generate secret for proxy: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout"
2021-03-11T15:20:58.961249Z error sds node:sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local-13 resource:default Close connection. Failed to get secret for proxy "sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local" from secret cache: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout"
2021-03-11T15:20:58.961300Z info sds node:sidecar~10.244.3.2~prometheus-7767dfd55-9wg5k.istio-system~istio-system.svc.cluster.local-13 resource:default connection is terminated: rpc error: code = Canceled desc = context canceled
[Envoy (Epoch 0)] [2021-03-11 15:20:58.961][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:91] gRPC config stream closed: 14, connection error: desc = "transport: Error while dialing dial tcp: lookup istio-pilot.istio-system.svc on 10.96.0.10:53: read udp 10.244.3.2:38023->10.96.0.10:53: i/o timeout"
2021-03-11T15:20:59.812614Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-03-11T15:21:01.812450Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
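The repeated "read udp ...->10.96.0.10:53: i/o timeout" lines point at cluster DNS rather than at istio itself. A hedged way to confirm that, borrowing the busybox image recommended by the upstream Kubernetes DNS-debugging docs:

# Is CoreDNS running and ready?
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Can a throwaway pod resolve the name the sidecar is failing on?
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup istio-pilot.istio-system.svc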
@timhughes how much memory did you allocate to your Docker Desktop and minikube?
I noticed you are using multi-node (--nodes=4 --cpus=4).
Is there a reason you are using 4 nodes?
It is unlikely that you need multi-node on a local cluster. How much memory does your system have?
/triage needs-information /kind support
> @timhughes how much memory did you allocate to your Docker Desktop and minikube?

I am on Linux, so there is no need for Docker Desktop. My local machine has 24 cores and 64GB RAM, and the disks are NVMe rated at 3500MB/s, so I am not worried about resources.

> I noticed you are using multi-node (--nodes=4 --cpus=4). Is there a reason you are using 4 nodes?

I am attempting to run rook-ceph, which requires 3 worker nodes.

> It is unlikely that you need multi-node on a local cluster. How much memory does your system have?
> @mikala3 what error do you get? Have you tried allocating more memory? Istio needs a lot of memory.
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-eastwestgateway
Deployment/istio-system/istio-ingressgateway
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
minikube start --profile=${CTX_CLUSTER1} --cpus=6 --memory=20g --driver=kvm2 --nodes=2 --addons storage-provisioner,default-storageclass,metallb
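The "resources not ready after 5m0s" in the error matches istioctl install's default readiness timeout, so if the gateway pods are merely slow to come up on a constrained local cluster, extending the wait may be enough. A sketch, assuming a recent istioctl that carries the --readiness-timeout flag:

# Allow the gateway deployments longer to become ready before istioctl gives up
istioctl install --readiness-timeout 10m0s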
This may just be a DNS issue.
If I kill the coredns pod and then the istio pods, it appears to fix it.
As a workaround this is fine for me, but coming up with a way of having it work on the first go would be better.
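In concrete terms, that workaround might look like the following sketch (the rollout/delete forms avoid hard-coding pod names, which vary):

# Restart CoreDNS first...
kubectl -n kube-system rollout restart deployment coredns
# ...then recreate the istio pods so their sidecars retry against a healthy resolver
kubectl -n istio-system delete pods --all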
> This may just be a DNS issue.
> If I kill the coredns pod and then the istio pods, it appears to fix it.
> As a workaround this is fine for me, but coming up with a way of having it work on the first go would be better.
So you are able to run istio on minikube (with multiple nodes)?
Yes
@timhughes I wonder if the binary in this PR helps? Here are the links to the binaries from a PR that I think might fix this issue. Do you mind trying one out?

http://storage.googleapis.com/minikube-builds/11731/minikube-linux-amd64
http://storage.googleapis.com/minikube-builds/11731/minikube-darwin-amd64
http://storage.googleapis.com/minikube-builds/11731/minikube-windows-amd64.exe

This PR by @andriyDev makes improvements to multi-node.
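Trying a PR build like this usually amounts to downloading the binary and running it in place of the installed minikube; a sketch for the Linux binary linked above:

curl -LO http://storage.googleapis.com/minikube-builds/11731/minikube-linux-amd64
chmod +x minikube-linux-amd64
# Start under a separate profile so the existing cluster is untouched (profile name is illustrative)
./minikube-linux-amd64 start -p pr11731 --nodes=4 --cpus=4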
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hi @timhughes, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and a long enough duration has passed, so this issue is likely difficult to replicate.
I will close this issue for now but feel free to reopen when you feel ready to provide more information.
When trying to run istio on a multi-node minikube, the istio-proxy containers do not start.

Steps to reproduce the issue:

Full output of minikube start command used, if not already included: minikube-start-nodes4-cpus4-memory8g.log, minikube-addons-enable-istio-provisioner.log, minikube-addons-enable-istio.log

Full output of minikube logs command: minikube-logs.log

Full container logs from the istio-operator and istio-system namespaces: istio-operator-istio-operator-6dbfd4446f-xxqnn-1611512011468363254.log, istio-system-istiod-6ccd677dc7-gmzsw-1611512018108421785.log, istio-system-istio-ingressgateway-8577c95547-v25xk-1611512014880656996.log, istio-system-prometheus-7767dfd55-kd9c8-1611512021807342019.log, istio-system-prometheus-7767dfd55-kd9c8-1611512024337276220.log

Relevant logs from the istio-proxy container: