kubernetes / dashboard

General-purpose web UI for Kubernetes clusters

Get ServiceUnavailable 503 #971

Closed: sigrlami closed this issue 8 years ago

sigrlami commented 8 years ago

I followed this guide on configuring Kubernetes on a Fedora master and node. Everything went fine, but when I went to the UI I got nothing, and I couldn't install the dashboard as described in the guide, so I created

    {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "name": "kube-system"
      }
    }

and installed it with

    kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml --namespace=kube-system
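
As an aside, the same namespace can also be created without a manifest file (a sketch; kubectl has a `create namespace` subcommand, though behavior may vary by version):

    kubectl create namespace kube-system
    kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml --namespace=kube-system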

which installed fine, but now when I open

    https://<my-ip>:8080/ui

I get

    {
      "paths": [
        "/api",
        "/api/v1",
        "/apis",
        "/apis/autoscaling",
        "/apis/autoscaling/v1",
        "/apis/batch",
        "/apis/batch/v1",
        "/apis/extensions",
        "/apis/extensions/v1beta1",
        "/healthz",
        "/healthz/ping",
        "/logs/",
        "/metrics",
        "/resetMetrics",
        "/swaggerapi/",
        "/version"
      ]
    }

and when I try to use

    http://<my-ip>:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

to access it, I get

    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "no endpoints available for service \"kubernetes-dashboard\"",
      "reason": "ServiceUnavailable",
      "code": 503
    }

I don't know how to fix it. Any suggestions?

Confirmation that the plugin is installed:

    [root@fed-master ~]# kubectl get deployment kubernetes-dashboard --namespace=kube-system
    NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kubernetes-dashboard   1         0         0            0           2h
    [root@fed-master ~]# kubectl get svc kubernetes-dashboard --namespace=kube-system
    NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    kubernetes-dashboard   10.254.154.193   nodes         80/TCP    2h

Cross-posted to SO: http://stackoverflow.com/questions/38083876/kubernetes-ui-unreachable

flaviotorres commented 8 years ago

Same here...

$ kubectl describe service kubernetes-dashboard --namespace=kube-system
Name:           kubernetes-dashboard
Namespace:      kube-system
Labels:         app=kubernetes-dashboard
Selector:       app=kubernetes-dashboard
Type:           NodePort
IP:         10.64.184.28
Port:           <unset> 80/TCP
NodePort:       <unset> 30020/TCP
Endpoints:      <none>
Session Affinity:   None
No events.

$ kubectl --namespace=kube-system get ep kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      23m

sigrlami commented 8 years ago

@flaviotorres I'm a total newbie to Kubernetes; your commands on my master give the following output:

[root@fed-master ~]# kubectl describe service kubernetes-dashboard --namespace=kube-system
Name:           kubernetes-dashboard
Namespace:      kube-system
Labels:         app=kubernetes-dashboard
Selector:       app=kubernetes-dashboard
Type:           NodePort
IP:             10.254.154.193
Port:           <unset> 80/TCP
NodePort:       <unset> 32239/TCP
Endpoints:      <none>
Session Affinity:   None
No events.
[root@fed-master ~]# kubectl --namespace=kube-system get ep kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      5h

which is really frustrating, because I followed the tutorials.

bryk commented 8 years ago

Hmm... How did you set up your cluster? Do you have any other services running on it, like Heapster? Can you show the logs of the dashboard pod? (A command like kubectl logs kubernetes-dashboard-3717423461-iq5aq --namespace kube-system.)

I'm asking this because Dashboard works well on clusters that are correctly configured and have service accounts. When somebody starts a cluster manually, there's often something wrong.
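
For a hand-built cluster, the service-account plumbing can be checked directly (a sketch; exact output varies by Kubernetes version):

    # every namespace should have a 'default' service account with a token secret
    kubectl get serviceaccounts --all-namespaces
    kubectl describe serviceaccount default --namespace=kube-system
    # the token is stored as a secret named default-token-*
    kubectl get secrets --namespace=kube-system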

sigrlami commented 8 years ago

@bryk I started a clean DigitalOcean VM with Fedora just for Kubernetes and followed the guide, because I'm totally new to the ecosystem.

[root@fed-master ~]# kubectl logs kubernetes-dashboard --namespace kube-system
Error from server: pods "kubernetes-dashboard" not found

but I see it as a deployment:

[root@fed-master ~]# kubectl get deployments --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard   1         0         0            0           6h

I think the guide is outdated. @roberthbailey helped me out in the SO chat, but we couldn't get far.

bryk commented 8 years ago

@cheld recently described a nice debugging procedure in: https://github.com/kubernetes/dashboard/issues/916#issuecomment-228724010

Basically, please check whether your master is reachable from within the cluster. That is: create a pod that has bash and/or curl, exec into it, and try to curl the master. If the pod cannot curl the master, then the UI cannot either.
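
A minimal sketch of that check (the image tag and pod name are illustrative; adjust the service IP to your cluster's service CIDR):

    # start a throwaway pod that has bash and curl
    kubectl run test --image=gcr.io/google_containers/hyperkube-amd64:v1.3.0-beta.1 sleep 100000
    # look up the generated pod name, then exec into it
    kubectl get pods
    kubectl exec <test-pod-name> -it bash
    # from inside the pod, try to reach the apiserver's service IP
    curl -k https://10.254.0.1:443/version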

bryk commented 8 years ago

And the logs? Type kubectl get pods --namespace kube-system and then kubectl logs <the_name> --namespace kube-system

sigrlami commented 8 years ago

    kubectl get pods --namespace kube-system

doesn't return anything,

and as I don't have a pod name, I can't exec into it.

bryk commented 8 years ago

and as I don't have a pod name, I can't exec into it

See @cheld's instruction. You can kubectl run test --image gcr.io/google_containers/hyperkube-amd64:v1.3.0-beta.1 sleep 100000 and exec into it like kubectl exec test-541238630-res2e -it bash

bryk commented 8 years ago

How about kubectl get rs --all-namespaces and kubectl get pod --all-namespaces?

sigrlami commented 8 years ago

[root@fed-master ~]# kubectl get rs --all-namespaces
NAMESPACE     NAME                              DESIRED   CURRENT   AGE
default       test-983855151                    1         0         5m
kube-system   kubernetes-dashboard-1775839595   1         0         16h

and

[root@fed-master ~]# kubectl get pod --all-namespaces
[root@fed-master ~]# 

gives nothing

sigrlami commented 8 years ago

So the thing is: when I run kubectl run test --image .. it creates a deployment but not a pod; the output above is from after I created test.

Is my configuration wrong?

bryk commented 8 years ago

So it seems that something is really broken in your cluster. Can you maybe show all the events of your cluster (kubectl get events --all-namespaces)? Or just use a hosted offering of Kubernetes, like GKE: https://cloud.google.com/container-engine/. Hosted clusters are managed by cloud providers and correctly configured. It is way easier to use them :)

sigrlami commented 8 years ago

[root@fed-master ~]# kubectl get events --all-namespaces
NAMESPACE     FIRSTSEEN   LASTSEEN   COUNT     NAME                              KIND                    SUBOBJECT   TYPE      REASON              SOURCE                      MESSAGE
default       10m         9s         23        test-983855151                    ReplicaSet                          Warning   FailedCreate        {replicaset-controller }    Error creating: pods "test-983855151-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
default       10m         10m        1         test                              Deployment                          Normal    ScalingReplicaSet   {deployment-controller }    Scaled up replica set test-983855151 to 1
kube-system   12h         9s         1507      kubernetes-dashboard-1775839595   ReplicaSet                          Warning   FailedCreate        {replicaset-controller }    Error creating: pods "kubernetes-dashboard-1775839595-" is forbidden: no API token found for service account kube-system/default, retry after the token is automatically created and added to the service account
kube-system   12h         10s        1507      kubernetes-dashboard-v1.0.1       ReplicationController               Warning   FailedCreate        {replication-controller }   Error creating: pods "kubernetes-dashboard-v1.0.1-" is forbidden: no API token found for service account kube-system/default, retry after the token is automatically created and added to the service account

That's interesting.

Yeah, I know I could use a hosted version.

sigrlami commented 8 years ago

But my point here is that the docs in the official guide should be updated for those who are doing this. My goal is to test on a VM first and then move to my own hardware, because I have a standalone server.

So where can I read about API tokens?

bryk commented 8 years ago

Can you file an issue on the kubernetes/kubernetes repo and explain your issues there? I know this problem is a huge pain, but I'm no expert in this Fedora installation and don't know how to test/debug it.

cc @kubernetes/docs

sigrlami commented 8 years ago

OK, thanks. I think you can close this now.

bryk commented 8 years ago

One more thing. If you want to try out Kubernetes locally, we have scripts in this repo for doing this. They generally work for all folks. Just run this script to spin up a local K8s cluster: https://github.com/kubernetes/dashboard/blob/master/build/hyperkube.sh
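
Roughly (a sketch, assuming Docker is installed; the script lives in this repo):

    git clone https://github.com/kubernetes/dashboard.git
    cd dashboard
    ./build/hyperkube.sh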

sigrlami commented 8 years ago

After looking around, I did the following steps (a sketch follows below):

  1. fully comment out the KUBE_ADMISSION_CONTROL line in /etc/kubernetes/apiserver
  2. systemctl daemon-reload
  3. systemctl restart kubelet
  4. systemctl restart kube-apiserver

which removed the API token error.
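
A sketch of those steps on the master (the sed one-liner is illustrative; editing the file by hand works just as well):

    # comment out the KUBE_ADMISSION_CONTROL line
    sed -i 's/^KUBE_ADMISSION_CONTROL=/#&/' /etc/kubernetes/apiserver
    systemctl daemon-reload
    systemctl restart kubelet
    systemctl restart kube-apiserver

Note that commenting out KUBE_ADMISSION_CONTROL disables the ServiceAccount admission controller entirely, so this is a workaround rather than a proper fix.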

sigrlami commented 8 years ago

[root@fed-master ~]# kubectl get events --namespace=kube-system
FIRSTSEEN   LASTSEEN   COUNT     NAME                                    KIND      SUBOBJECT                               TYPE      REASON                   SOURCE                 MESSAGE
16m         16m        1         kubernetes-dashboard-1775839595-5emhu   Pod                                               Normal    Scheduled                {default-scheduler }   Successfully assigned kubernetes-dashboard-1775839595-5emhu to fed-node
11m         11m        1         kubernetes-dashboard-1775839595-5emhu   Pod                                               Normal    NodeControllerEviction   {controllermanager }   Marking for deletion Pod kubernetes-dashboard-1775839595-5emhu from Node fed-node
11m         11m        3         kubernetes-dashboard-1775839595-vko33   Pod                                               Warning   FailedScheduling         {default-scheduler }   no nodes available to schedule pods
11m         11m        1         kubernetes-dashboard-1775839595-vko33   Pod                                               Warning   FailedScheduling         {default-scheduler }   no nodes available to schedule pods
11m         11m        1         kubernetes-dashboard-1775839595-vko33   Pod                                               Normal    Scheduled                {default-scheduler }   Successfully assigned kubernetes-dashboard-1775839595-vko33 to 127.0.0.1

Despite "Successfully assigned kubernetes-dashboard-1775839595-5emhu to fed-node", I still can't access the UI.

bryk commented 8 years ago

This is progress. Can you get the logs from the pod now?

sigrlami commented 8 years ago

It's looping with a CrashLoopBackOff error:

1m        1m        1         kubernetes-dashboard-1775839595-vko33   Pod       spec.containers{kubernetes-dashboard}   Normal    Created      {kubelet 127.0.0.1}   Created container with docker id 6133724cf58a
1m        1m        1         kubernetes-dashboard-1775839595-vko33   Pod       spec.containers{kubernetes-dashboard}   Normal    Started      {kubelet 127.0.0.1}   Started container with docker id 6133724cf58a
1m        30s       8         kubernetes-dashboard-1775839595-vko33   Pod                                               Warning   FailedSync   {kubelet 127.0.0.1}   Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1775839595-vko33_kube-system(6750f407-3de6-11e6-a67a-040117906401)"

but the test pod is running fine:

4m          4m         1         test-983855151-jo7im   Pod          spec.containers{test}   Normal    Pulled                  {kubelet 127.0.0.1}        Container image "gcr.io/google_containers/hyperkube-amd64:v1.3.0-beta.1" already present on machine
4m          4m         1         test-983855151-jo7im   Pod          spec.containers{test}   Normal    Created                 {kubelet 127.0.0.1}        Created container with docker id 216a92468a08
4m          4m         1         test-983855151-jo7im   Pod          spec.containers{test}   Normal    Started                 {kubelet 127.0.0.1}        Started container with docker id 216a92468a08
4m          4m         1         test-983855151         ReplicaSet                           Normal    SuccessfulCreate        {replicaset-controller }   Created pod: test-983855151-jo7im
4m          4m         1         test                   Deployment                           Normal    ScalingReplicaSet       {deployment-controller }   Scaled up replica set test-983855151 to 1

and there's only 1 pod:

[root@fed-master ~]# kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
test-983855151-jo7im   1/1       Running   0          4m

The curl test:

[root@fed-master ~]# kubectl exec test-983855151-jo7im -- curl -k -u admin:admin https://10.0.0.1:443
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0^C

From #916 it looks like I have the same problem. How can I "rebuild the kube cluster"? cc @arhided

bryk commented 8 years ago

@cheld Can you help?

cheld commented 8 years ago

The IP in the last curl is not correct. The Fedora guide uses a non-default subnet:

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

Please get the service IP from the 'kubernetes' service (kubectl get svc) and try again. The correct IP for the master is most likely 10.254.0.1.

sigrlami commented 8 years ago

@cheld you're correct. Running from the master:

[root@fed-master ~]# kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1   <none>        443/TCP   2h
[root@fed-master ~]# kubectl exec test-983855151-jo7im -- curl -k -u admin:admin https://10.254.0.1:443
Error from server: dial tcp 127.0.0.1:10250: getsockopt: connection refused

sigrlami commented 8 years ago

And I was able to run the test again after restarting:

[root@fed-master ~]# kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1   <none>        443/TCP   2h

[root@fed-master ~]# kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
test-983855151-lv9ps   1/1       Running   0          31s

[root@fed-master ~]# kubectl exec test-983855151-lv9ps -- curl -k -u admin:admin https://10.254.0.1:443
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0^C

@cheld same story for 10.254.0.1. Note that my node is actually another VM with a different global IP.

cheld commented 8 years ago

Ok, can you try the following (a sketch follows below):

  1. curl your master's IP on eth0
  2. curl your master's virtual service IP (10.254....)
  3. repeat both on the minion
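
A sketch of those checks (addresses are illustrative; substitute your master's eth0 IP and your service CIDR):

    # 1. from the master: apiserver on the eth0 address
    curl -k https://<master-eth0-ip>:443/version
    # 2. from the master: apiserver via the virtual service IP
    curl -k https://10.254.0.1:443/version
    # 3. run both again from the minion
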
bryk commented 8 years ago

Closing as stale.

kenzhaoyihui commented 7 years ago

Hi, I have the same problem, could you help me? Kubernetes version: v1.2.0; Linux: Fedora 23 x64.

1. The kube-ui pod is running:

    [root@dhcp-8-235 kubernetconfigfile]# kubectl get pods --namespace=kube-system
    NAME               READY     STATUS    RESTARTS   AGE
    kube-ui-v1-qykq0   1/1       Running   0          2h

2. The kube-ui endpoint is OK:

    [root@dhcp-8-235 kubernetconfigfile]# kubectl get endpoints --namespace=kube-system
    NAME      ENDPOINTS          AGE
    kube-ui   172.17.0.10:8080   2h

3. But the problem occurs:

    [root@dhcp-8-235 kubernetconfigfile]# kubectl get deployment kube-ui --namespace=kube-system
    Error from server: deployments.extensions "kube-ui" not found

    [root@dhcp-8-235 kubernetconfigfile]# kubectl get deployment kubernetes-dashboard --namespace=kube-system
    Error from server: deployments.extensions "kubernetes-dashboard" not found

When I input "http://master:8080/ui", it displays:

    {
      "paths": [
        "/api",
        "/api/v1",
        "/apis",
        "/apis/autoscaling",
        "/apis/autoscaling/v1",
        "/apis/batch",
        "/apis/batch/v1",
        "/apis/extensions",
        "/apis/extensions/v1beta1",
        "/healthz",
        "/healthz/ping",
        "/logs/",
        "/metrics",
        "/resetMetrics",
        "/swaggerapi/",
        "/version"
      ]
    }

Thanks, Ken

fedya commented 7 years ago

Same issue.

kenzhaoyihui commented 7 years ago

@fedya You can try the latest version, Kubernetes v1.5.1; kubeadm is a good tool to install Kubernetes.

Also, you can docker pull the kubernetes-dashboard image first, then create the kubernetes-dashboard pod with kubectl. Just my thought.

firelyu commented 7 years ago

Same issue. When I check the container logs, I get the following.

# docker ps -a | grep kubernetes-dashboard-amd64:v1.4.1
082652d3b602        index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1   "/dashboard --port=90"   4 minutes ago        Exited (1) 4 minutes ago

# docker logs 082652d3b602
Using HTTP port: 9090
Creating API server client for http://localhost:8080
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://localhost:8080/version: dial tcp 127.0.0.1:8080: getsockopt: connection refused

I found I had misconfigured --apiserver-host in kubernetes-dashboard.yaml, and fixing it resolved the problem. When I set --apiserver-host=http://hostname:8080, the dashboard errored out with "cluster is misconfigured". After I set --apiserver-host=http://xxx.xxx.xxx.xxx:8080 (a plain IP address), the dashboard worked fine.

# vi kubernetes-dashboard.yaml
spec:
  containers:
  - name: kubernetes-dashboard
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
    ...
    args:
    - --apiserver-host=http://115.28.130.30:8080
    ...
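
After editing the manifest, the deployment needs to be re-created for the new args to take effect (a sketch, assuming the file above):

    kubectl delete -f kubernetes-dashboard.yaml
    kubectl create -f kubernetes-dashboard.yaml
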
vhosakot commented 7 years ago

I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0.

I installed kubeadm following https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard with

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.0/src/deploy/kubernetes-dashboard.yaml

I see the kubernetes-dashboard pod in CrashLoopBackOff status, and the k8s_kubernetes-dashboard.* container on the worker is in the Exited state.

Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?

# kubectl --namespace=kube-system get all
NAME                                                          READY     STATUS             RESTARTS   AGE
po/calico-policy-controller-mqsmh                             1/1       Running            0          4h
po/canal-etcd-tm2rv                                           1/1       Running            0          4h
po/canal-node-3nv2t                                           3/3       Running            0          4h
po/canal-node-5fckh                                           3/3       Running            1          4h
po/canal-node-6zgq8                                           3/3       Running            0          4h
po/canal-node-rtjl8                                           3/3       Running            0          4h
po/dummy-2088944543-09w8n                                     1/1       Running            0          4h
po/etcd-vhosakot-kolla-kube1.localdomain                      1/1       Running            0          4h
po/kube-apiserver-vhosakot-kolla-kube1.localdomain            1/1       Running            2          4h
po/kube-controller-manager-vhosakot-kolla-kube1.localdomain   1/1       Running            0          4h
po/kube-discovery-1769846148-pftx5                            1/1       Running            0          4h
po/kube-dns-2924299975-9m2cp                                  4/4       Running            0          4h
po/kube-proxy-0ndsb                                           1/1       Running            0          4h
po/kube-proxy-h7qrd                                           1/1       Running            1          4h
po/kube-proxy-k6168                                           1/1       Running            0          4h
po/kube-proxy-lhn0k                                           1/1       Running            0          4h
po/kube-scheduler-vhosakot-kolla-kube1.localdomain            1/1       Running            0          4h
po/kubernetes-dashboard-3203962772-mw26t                      0/1       CrashLoopBackOff   11         41m

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/canal-etcd             10.96.232.136    <none>        6666/TCP        4h
svc/kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   4h
svc/kubernetes-dashboard   10.100.254.77    <nodes>       80:30085/TCP    41m

NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            4h

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-discovery         1         1         1            1           4h
deploy/kube-dns               1         1         1            1           4h
deploy/kubernetes-dashboard   1         1         1            0           41m

NAME                                 DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller          1         1         1         4h
rs/dummy-2088944543                  1         1         1         4h
rs/kube-discovery-1769846148         1         1         1         4h
rs/kube-dns-2924299975               1         1         1         4h
rs/kubernetes-dashboard-3203962772   1         1         0         41m

# kubectl --namespace=kube-system describe pod kubernetes-dashboard-3203962772-mw26t
  20m    5s    89    {kubelet vhosakot-kolla-kube2.localdomain}                        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203962772-mw26t_kube-system(67b0d69b-0b47-11e7-8c97-7a2ed4192438)"

# kubectl --namespace=kube-system logs kubernetes-dashboard-3203962772-mw26t
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

# docker ps -a | grep -i dash
3c33cf43d5e4        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0   "/dashboard --port=90"   54 seconds ago      Exited (1) 22 seconds ago                       k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4

# docker logs k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
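
The i/o timeout suggests the service network (10.96.0.0/12 here) is not reachable from the node running the dashboard pod. A quick check from that worker node (a sketch):

    # should return version JSON; a timeout points at pod/service networking (CNI), not the dashboard
    curl -k https://10.96.0.1:443/version
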
aamol commented 7 years ago

I am also facing the same issue. Was it resolved? A few logs:

    [root@localhost ~]# kubectl get pods
    No resources found.
    [root@localhost ~]# kubectl get pods --all-namespaces
    No resources found.
    [root@localhost ~]# kubectl get svc
    NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   10.254.0.1   <none>        443/TCP   2h
    [root@localhost ~]# kubectl get pods --all-namespaces
    [root@localhost ~]# kubectl get pods --all-namespaces
    No resources found.
    [root@localhost ~]# kubectl get svc --all-namespaces
    NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    default       kubernetes             10.254.0.1      <none>        443/TCP   2h
    kube-system   kubernetes-dashboard   10.254.25.191   <none>        80/TCP    1h

    [root@localhost ~]# kubectl get events --all-namespaces
    kube-system   10m   1h   24   kubernetes-dashboard-3333321904   ReplicaSet   Warning   FailedCreate   {replicaset-controller }   Error creating: No API token found for service account "kubernetes-dashboard", retry after the token is automatically created and added to the service account

Please let me know if I am missing something.
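
One way to confirm whether that service account exists and has a token (a sketch; the names come from the event above):

    kubectl describe serviceaccount kubernetes-dashboard --namespace=kube-system
    # the token should appear as a secret named kubernetes-dashboard-token-*
    kubectl get secrets --namespace=kube-system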

Victolor commented 1 year ago

I had the same issue. I followed a guide for a master and control node, installed Flannel, and installed the dashboard. While trying to access it I got a ServiceUnavailable error.

While checking the firewall logs I saw blocked states for port 8472/udp. When I looked up this port I found the following documentation: https://github.com/coreos/coreos-kubernetes/blob/master/Documentation/kubernetes-networking.md

So in my case it was a firewall issue. I had to open the following ports on my master node: 443/TCP, 8285/UDP, 8472/UDP (a sketch follows below).
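
On Fedora with firewalld, opening those ports looks roughly like this (a sketch, assuming firewalld is the firewall in use):

    firewall-cmd --permanent --add-port=443/tcp
    firewall-cmd --permanent --add-port=8285/udp   # flannel udp backend
    firewall-cmd --permanent --add-port=8472/udp   # flannel vxlan backend
    firewall-cmd --reload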

I hope this helps.