gyliu513 opened this issue 6 years ago
Output of curl -vv http://server.cluster2.global/helloworld from the client pod:
# curl -vv http://server.cluster2.global/helloworld
* Hostname was NOT found in DNS cache
* Trying 1.1.1.1...
* Connected to server.cluster2.global (1.1.1.1) port 80 (#0)
> GET /helloworld HTTP/1.1
> User-Agent: curl/7.35.0
> Host: server.cluster2.global
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sun, 08 Jul 2018 15:34:32 GMT
* Server envoy is not blacklisted
< server: envoy
< x-envoy-upstream-service-time: 10
<
* Connection #0 to host server.cluster2.global left intact
upstream connect error or disconnect/reset before headers#
Sorry for the delay. I will update the YAMLs today. There were some API changes.
Is this with the 0.8 images?
It seems the ingress gateway conflicts with my ingress controller; after deleting the ingress controller in my cluster, it works fine.
Do you have any comments on what is wrong with my ingress controller?
root@gyliu-dev1:~/go/src/github.com/kubernetes-sigs/federation-v2# kubectl get svc -n kube-system | egrep "ingress|backend"
default-backend ClusterIP 100.0.0.128 <none> 80/TCP 18d
icp-management-ingress ClusterIP 100.0.0.87 <none> 8443/TCP 18d
@gyliu513 @rshriram I am trying the above with the istio-1.1 branch and seeing the 503.
* Hostname was NOT found in DNS cache
* Trying 1.1.1.2...
* Connected to server.ns2.svc.cluster.global (1.1.1.2) port 80 (#0)
> GET /helloworld HTTP/1.1
> User-Agent: curl/7.35.0
> Host: server.ns2.svc.cluster.global
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Tue, 15 Jan 2019 00:20:15 GMT
* Server envoy is not blacklisted
< server: envoy
<
* Connection #0 to host server.ns2.svc.cluster.global left intact
upstream connect error or disconnect/reset before headers
I am observing the following:
i) The istio-proxy log on the client pod shows that it's connecting to 1.1.1.2:80 (and not the egress pod IP as @gyliu513 showed above).
ii) The istio egress gateway doesn't have any logs (which makes sense given i) above).
Any pointers on what could be causing the istio-proxy to use the resolved IP (1.1.1.2 in this case) and not use the egress gateway?
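For anyone comparing against a working setup: in the gateway-connected multicluster configuration this routing is normally driven by a ServiceEntry in the client cluster whose endpoint is the remote cluster's ingress gateway. The sketch below is illustrative only; the resource name, the 9.9.9.9 gateway address, and the port name are assumptions, and 1.1.1.2 is just the placeholder address seen in the output above.

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: server-ns2-global          # hypothetical name
spec:
  hosts:
  - server.ns2.svc.cluster.global
  location: MESH_INTERNAL
  ports:
  - name: http1                    # assumed port name
    number: 80
    protocol: http
  resolution: DNS
  addresses:
  - 1.1.1.2                        # placeholder VIP returned by the .global stub DNS
  endpoints:
  - address: 9.9.9.9               # assumption: reachable address of cluster2's ingress gateway
    ports:
      http1: 15443                 # mTLS port on the remote ingress gateway

With an entry like this in place, the client sidecar should send traffic destined for 1.1.1.2:80 to the remote ingress gateway on 15443 instead of dialing 1.1.1.2 directly; if the proxy still connects to 1.1.1.2:80, it suggests the ServiceEntry is not being matched (host, address, or port mismatch).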
Hi @rshriram, I ran the multicluster setup again in a new Kubernetes cluster and the case failed.
I can deploy all of the components, and I can see that all related resources are working well; both the client and the server can be created and run fine.
CA cluster
cluster1, which is named cluster77 in my env.
cluster2, which is cluster121 in my env.
From cluster2, where the server is running:
From cluster1, where the client is running:
We can see that CoreDNS is working well in both cluster1 and cluster2.
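(For context, .global resolution is typically wired up by pointing a DNS stub domain at the istiocoredns service; a minimal sketch for kube-dns is shown below, where 100.0.0.200 is only a placeholder for the istiocoredns ClusterIP in the given cluster.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # 100.0.0.200 is a placeholder: use the ClusterIP of the istiocoredns service
  stubDomains: |
    {"global": ["100.0.0.200"]}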
But when I run the following command from the client to the server, the client istio-proxy reports some errors and the curl returns 503.
Curl the server from the client:
Check the client istio-proxy log; it reports the following error:
Please note 20.1.35.196 is my egressgateway pod IP in the cluster where the client is running. The egressgateway pod reports the following:
We can see that the egress gateway in cluster1 (cluster77 in my env) is trying to access the ingressgateway load balancer IP in cluster2 (cluster121 in my env).
But checking the log of the ingressgateway in cluster2 (cluster121 in my env), there are no incoming requests from cluster1 (cluster77 in my env).
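One thing that may be worth double-checking on cluster2 (this is a guess, since the ingressgateway configuration is not shown here) is whether the ingress gateway actually exposes the port that cluster1's egress gateway is dialing. In the Istio 1.1 gateway-connected setup this is done with a Gateway bound to port 15443 in AUTO_PASSTHROUGH mode, roughly like the sketch below (resource name as used in the Istio docs):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.global"

If this Gateway is missing, or the load balancer in front of the ingressgateway service does not forward 15443, connections from cluster1's egress gateway would fail before reaching Envoy in cluster2, which would explain why no incoming requests show up in cluster2's ingressgateway log.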