kubernetes-retired / contrib

[EOL] This is a place for various components in the Kubernetes ecosystem that aren't part of the Kubernetes core.
Apache License 2.0

Pending message for exposed externalApi #2984

Closed armanriazi closed 5 years ago

armanriazi commented 5 years ago

I don't know why Kubernetes shows a pending result.

sudo kubectl get svc -n ingress-nginx -v=4

When I run this command I get this result:

no kind is registered for the type v1beta1.Table in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.108.240.88   <pending>     80:30191/TCP,443:30616/TCP   21h

YAML file:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: 172.18.3.11
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https

  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
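For context: on a cluster without a cloud provider, nothing allocates the address for a `type: LoadBalancer` Service, so `EXTERNAL-IP` stays `<pending>` unless a bare-metal load-balancer implementation (for example MetalLB) is installed. A sketch of commands to inspect the Service above (names taken from the manifest; the output depends on the cluster):

```shell
# Show the Service's status and any events explaining the pending external IP
kubectl describe svc ingress-nginx -n ingress-nginx

# List recent events in the namespace for allocation errors
kubectl get events -n ingress-nginx --sort-by=.metadata.creationTimestamp
```

As a workaround on bare metal, `type: NodePort` (or installing a load-balancer controller) avoids the permanently pending state.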

I use Docker 18.06 and Kubernetes 1.13 for a test deployment in a private organization, with an experimental IP range of 172.18.3.9-20.

Flannel log:

kubectl logs --namespace kube-system kube-flannel-ds-amd64-ms94w -c kube-flannel

Result:

Failed to list v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E1211 11:48:43.238318 1 reflector.go:201] github.com/coreos/flannel/subnet/kube/kube.go:295: Failed to list v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=0: net/http: TLS handshake timeout

Used kubeadm init:

kubeadm init --pod-network-cidr 10.255.0.0/16 --service-cidr 10.244.0.0/16 --service-dns-domain "k8s" --apiserver-advertise-address 172.18.3.9
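A side note on the init flags above (an observation, not confirmed in the thread): flannel's stock manifest assumes the pod network `10.244.0.0/16`, which here is used as the *service* CIDR instead, while the pod CIDR is `10.255.0.0/16`. If the flannel ConfigMap does not match `--pod-network-cidr`, flannel fails to route and its kubelet-facing API calls can fail as in the log above. A sketch of how to check (the ConfigMap name `kube-flannel-cfg` is the one used by flannel's standard manifest and may differ on a given cluster):

```shell
# Print flannel's network config; its "Network" field should equal
# the --pod-network-cidr passed to kubeadm init (10.255.0.0/16 here)
kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}'

# Compare with the node's assigned pod CIDR
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
```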

The Kubernetes dashboard shows everything (pods, ingresses, ReplicaSets, the private Docker registry container) as OK, except for this service exposing cafe.example.com/cafe and its external IP!

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

armanriazi commented 5 years ago

/close

k8s-ci-robot commented 5 years ago

@armanriazi: Closing this issue.

In response to [this](https://github.com/kubernetes/contrib/issues/2984#issuecomment-472330856):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
armanriazi commented 5 years ago

/remove-lifecycle stale

armanriazi commented 5 years ago

@fejta-bot