openshift / origin

Conformance test suite for OpenShift
http://www.openshift.org
Apache License 2.0

connection refused when getting external traffic into cluster with externalIPs #15753

Closed yu2003w closed 6 years ago

yu2003w commented 7 years ago

I set up an OpenShift cluster, deployed Redis successfully, and exposed the Redis service to external clients using externalIPs. When I tried to connect to the exposed Redis service from outside the cluster, I got a "connection refused" error.

Version

```
oc v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://host-10-1-241-54:8443
openshift v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
```

Steps To Reproduce
1. Deploy Redis with the following configuration:

   ```yaml
   apiVersion: v1
   kind: ReplicationController
   metadata:
     name: sb-2017-redis-master
   spec:
     replicas: 1
     selector:
       sb-2017-redis-master: master
     template:
       metadata:
         labels:
           sb-2017-redis-master: master
           sb-2017-redis-sentinel-svc: sentinel
           servicebroker: sb-2017-redis
         name: sb-2017-redisbt
       spec:
         containers:
         - name: master
           image: df_redis:3.2.9
           runAsUser: root
           imagePullPolicy: IfNotPresent
           env:
           - name: CLUSTER_NAME
             value: cluster-sb-2017-redis
           - name: MASTER
             value: "true"
           - name: REDIS_PASSWORD
             value: pass1234
           ports:
           - containerPort: 6379
           resources:
             limits:
               cpu: "0.1"
           volumeMounts:
           - mountPath: /redis-master-data
             name: data
         - name: sentinel
           image: df_redis:3.2.9
           runAsUser: root
           imagePullPolicy: IfNotPresent
           env:
           - name: CLUSTER_NAME
             value: cluster-sb-2017-redis
           - name: SENTINEL
             value: "true"
           - name: SENTINEL_HOST
             value: sb-2017-redis
           - name: SENTINEL_PORT
             value: "26379"
           - name: REDIS_PASSWORD
             value: pass1234
           ports:
           - containerPort: 26379
         volumes:
         - name: data
           emptyDir: {}
         dnsPolicy: ClusterFirst
         restartPolicy: Always
         securityContext: {}
         terminationGracePeriodSeconds: 30
   ```
2. Deploy a service with externalIPs:

   ```yaml
   kind: Service
   apiVersion: v1
   metadata:
     name: redis-svc
   spec:
     selector:
       sb-2017-redis-master: master
     ports:
     - name: redis-sen
       protocol: TCP
       port: 26379
       targetPort: 26379
     - name: redis-master
       protocol: TCP
       port: 6379
       targetPort: 6379
     externalIPs:
     - 10.1.236.92
     - 10.1.236.93
   ```

   Note: 10.1.236.92 and 10.1.236.93 are the real IP addresses of the host nodes.

Current Result

The Redis pod is running and can be accessed from other pods within the cluster. On the host nodes, the expected ports are listening:

```
[root@host-10-1-236-92 gluster]# netstat -an | grep 6379
tcp        0      0 10.1.236.92:6379        0.0.0.0:*       LISTEN
tcp        0      0 10.1.236.92:26379       0.0.0.0:*       LISTEN
```

The service is exposed successfully on the host nodes:

```
[root@host-10-1-236-92 gluster]# oc describe svc redis-svc
Name:              redis-svc
Namespace:         redis
Labels:
Selector:          sb-2017-redis-master=master
Type:              ClusterIP
IP:                172.30.38.194
External IPs:      10.1.236.92,10.1.236.93
Port:              redis-sen  26379/TCP
Endpoints:         172.30.53.2:26379
Port:              redis-master  6379/TCP
Endpoints:         172.30.53.2:6379
Session Affinity:  None
No events.
```

However, I failed to access the Redis service with a Redis client from outside the cluster:

```
jared@jared-ThinkPad-E550:~/redis/redis-3.2.9/src$ ./redis-cli -p 6379 -h 10.1.236.92
Could not connect to Redis at 10.1.236.92:6379: Connection refused
Could not connect to Redis at 10.1.236.92:6379: Connection refused
not connected>
```
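For anyone reproducing this, the refused connection can be confirmed without a Redis client by probing the TCP port directly. This is a minimal sketch; the host IP and ports are the illustrative values from the report above, not anything you can rely on in your own environment:

```python
import socket


def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefusedError, timeouts, and unreachable hosts.
        return False


if __name__ == "__main__":
    # External IP and service ports from the report above (illustrative values).
    for port in (6379, 26379):
        state = "open" if tcp_port_open("10.1.236.92", port) else "refused/unreachable"
        print(f"10.1.236.92:{port} -> {state}")
```

If the probe succeeds from the node itself but fails from an external machine, that points at the proxy/iptables layer rather than at Redis.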

Expected Result

The Redis service can be accessed from outside the cluster.

Thanks for helping resolve this issue.

weliang1 commented 7 years ago

Could you try port number 26379 instead of 6379?

```
~/redis/redis-3.2.9/src$ ./redis-cli -p 26379 -h 10.1.236.92
```

yu2003w commented 7 years ago

The same issue occurs on both 6379 and 26379. My installation uses flannel. When I switched to Open vSwitch, the issue was resolved and externalIPs/nodePort worked well.
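For reference, on OpenShift 3.x you can check which SDN plugin a cluster was configured with from the master config. A sketch; the file path is the default install location and is an assumption for your environment:

```shell
# Path assumes a default OpenShift 3.x install; adjust if yours differs.
CFG=/etc/origin/master/master-config.yaml
if [ -f "$CFG" ]; then
  # openshift-sdn (Open vSwitch) values look like redhat/openshift-ovs-subnet
  # or redhat/openshift-ovs-multitenant; a flannel setup will not use these.
  grep networkPluginName "$CFG"
else
  echo "no master config found at $CFG"
fi
```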

The cluster is set up on VMs provisioned by OpenStack. Can flannel be used in such an environment?

Thx.

openshift-bot commented 6 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 6 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 6 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close