smart-edge-open / edgeapps

Applications that can be onboarded to an Intel® Smart Edge Open edge node.

EMCO: Smart City deployment on OpenNESS (21.03.01), no sensors are visible on the GUI #56

Closed praj527 closed 3 years ago

praj527 commented 3 years ago

Hi, I deployed the Smart City application using EMCO by following this guide: https://github.com/open-ness/specs/blob/master/doc/building-blocks/emco/openness-emco.md

The deployment completed successfully, but on the UI I cannot see the cameras and sensors. All the pods are running and there are no errors in their logs, yet the sensors still do not appear on the GUI.

I am using three OpenNESS clusters for the deployment; their details are below. Please let me know if any other details are required.

  1. EMCO Cluster -> flavor: central_orchestrator (OpenNESS 21.03.01)

        ->  all other configuration was left at the defaults while deploying the cluster

        ->  pods:
            [root@edgeemco2 centos]# kubectl get pods --all-namespaces
            NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
            emco          emco-db-emco-mongo-0                               1/1     Running   195        13d
            emco          emco-etcd-0                                        1/1     Running   85         13d
            emco          emco-services-clm-784bf5c464-4wkqd                 1/1     Running   0          13d
            emco          emco-services-dcm-6774df749f-qzchm                 1/1     Running   0          13d
            emco          emco-services-dtc-6b55485c7d-p22rm                 1/1     Running   0          13d
            emco          emco-services-gac-7dd7d8b759-7lsvj                 1/1     Running   0          13d
            emco          emco-services-ncm-5c6886c5c8-5kr5s                 1/1     Running   0          13d
            emco          emco-services-orchestrator-bbfc74f47-dj8vf         1/1     Running   0          13d
            emco          emco-services-ovnaction-556bf75d8c-sntv7           1/1     Running   0          13d
            emco          emco-services-rsync-7848fd4568-ktrjv               1/1     Running   0          13d
            emco          emco-tools-fluentd-0                               1/1     Running   0          13d
            emco          emco-tools-fluentd-mkv29                           1/1     Running   147        13d
            harbor        harbor-app-harbor-chartmuseum-77cfffbd4d-82v5b     1/1     Running   2          14d
            harbor        harbor-app-harbor-clair-779df4555b-xlbrd           2/2     Running   936        14d
            harbor        harbor-app-harbor-core-6849c6fdf8-hj5xm            1/1     Running   324        14d
            harbor        harbor-app-harbor-database-0                       1/1     Running   201        14d
            harbor        harbor-app-harbor-jobservice-5d9bddcf6b-wk9pd      1/1     Running   7          14d
            harbor        harbor-app-harbor-nginx-5fd87c9477-fxc79           1/1     Running   8          14d
            harbor        harbor-app-harbor-notary-server-fc9dd596b-cjvv9    1/1     Running   3          14d
            harbor        harbor-app-harbor-notary-signer-6bdd4c5784-24chg   1/1     Running   3          14d
            harbor        harbor-app-harbor-portal-fd5ff4bc9-72lx4           1/1     Running   2          14d
            harbor        harbor-app-harbor-redis-0                          1/1     Running   1          14d
            harbor        harbor-app-harbor-registry-55b7966fc4-tt7pf        2/2     Running   2          14d
            harbor        harbor-app-harbor-trivy-0                          1/1     Running   1          14d
            kube-system   calico-kube-controllers-66956989f4-k2lc8           1/1     Running   1          14d
            kube-system   calico-node-n47fr                                  1/1     Running   372        14d
            kube-system   coredns-74ff55c5b-c6gk9                            1/1     Running   1          14d
            kube-system   coredns-74ff55c5b-ctt4v                            1/1     Running   1          14d
            kube-system   etcd-edgecontroller2                               1/1     Running   115        14d
            kube-system   kube-apiserver-edgecontroller2                     1/1     Running   370        14d
            kube-system   kube-controller-manager-edgecontroller2            1/1     Running   413        14d
            kube-system   kube-proxy-wqqll                                   1/1     Running   1          14d
            kube-system   kube-scheduler-edgecontroller2                     1/1     Running   382        14d
  2. Cloud Cluster -> flavor: minimal (OpenNESS 21.03.01)

        ->  all other configuration was left at the defaults while deploying the cluster

        ->  pods:
            [root@edgecloud centos]# kubectl get pods
            NAME                              READY   STATUS             RESTARTS   AGE
            cloud-db-785ff8cd79-mhlps         1/1     Running            0          19h
            cloud-storage-ccf8d85d5-t5zsc     1/1     Running            0          19h
            cloud-web-9cd779448-p44hv         1/1     Running            0          19h
            recycler-for-grafana-volume       0/1     Completed          0          11d
  3. Edge Cluster -> flavor: minimal (OpenNESS 21.03.01)

        ->  all other configuration was left at the defaults while deploying the cluster

        ->  pods:
            [root@edgenode centos]# kubectl get pods
            NAME                                                 READY   STATUS    RESTARTS   AGE
            traffic-office1-alert-f678f6497-bmzmg                1/1     Running   0          20h
            traffic-office1-analytics-traffic-6dc68dd65c-2fdj2   1/1     Running   0          20h
            traffic-office1-camera-discovery-598f4f74-hxqv9      1/1     Running   0          20h
            traffic-office1-cameras-6f85bfbb74-xhj2q             1/1     Running   0          20h
            traffic-office1-db-6df4fc8c99-v2wbx                  1/1     Running   0          20h
            traffic-office1-db-init-546697b598-q4lj7             1/1     Running   0          20h
            traffic-office1-mqtt-6467bc5779-dc25k                1/1     Running   0          20h
            traffic-office1-mqtt2db-6dc97c885f-5s6hj             1/1     Running   0          20h
            traffic-office1-smart-upload-5d54d58fd-npm6n         1/1     Running   0          20h
            traffic-office1-storage-f9c8c6c95-pnj4t              1/1     Running   0          20h

Screenshot of the GUI: smctgui

wushigax commented 3 years ago

Hi @praj527, can you share the Smart City pod logs? Which command did you use to execute the set_env.sh script?

praj527 commented 3 years ago

Hi @Wushigang915, the command we used for the environment setup was "./setup_env.sh -e -d -c -r", specifically "./setup_env.sh -e 192.168.0.17 -d 192.168.0.9 -c 192.168.0.13 -r".

I could not see any pod named SmartCity; I am attaching all the pod logs for your reference.

Cloud Cluster pod logs

1. cloud-web-9cd779448-p44hv: cloud-web-9cd779448-p44hv_pod.log
2. cloud-storage-ccf8d85d5-t5zsc: cloud-storage-ccf8d85d5-t5zsc_pod.log
3. cloud-db-785ff8cd79-mhlps: cloud-db-785ff8cd79-mhlps_pod.log

Edge Cluster pod logs

1. traffic-office1-storage-f9c8c6c95-pnj4t: traffic-office1-storage-f9c8c6c95-pnj4t_pod.log
2. traffic-office1-analytics-traffic-6dc68dd65c-2fdj2: traffic-office1-analytics-traffic-6dc68dd65c-2fdj2_pod.log
3. traffic-office1-camera-discovery-598f4f74-hxqv9: traffic-office1-camera-discovery-598f4f74-hxqv9_pod.log
4. traffic-office1-cameras-6f85bfbb74-xhj2q: traffic-office1-cameras-6f85bfbb74-xhj2q_pod.log
5. traffic-office1-db-6df4fc8c99-v2wbx: traffic-office1-db-6df4fc8c99-v2wbx_pod.log
6. traffic-office1-db-init-546697b598-q4lj7: traffic-office1-db-init-546697b598-q4lj7_pod.log
7. traffic-office1-mqtt-6467bc5779-dc25k: traffic-office1-mqtt-6467bc5779-dc25k_pod.log
8. traffic-office1-mqtt2db-6dc97c885f-5s6hj: traffic-office1-mqtt2db-6dc97c885f-5s6hj_pod.log
9. traffic-office1-smart-upload-5d54d58fd-npm6n: traffic-office1-smart-upload-5d54d58fd-npm6n_pod.log
10. traffic-office1-storage-f9c8c6c95-pnj4t: traffic-office1-storage-f9c8c6c95-pnj4t_pod.log

wushigax commented 3 years ago

@praj527, this seems to be a communication problem between the db pod on the edge cluster and the cloud db pod. Can you run two tests? 1) Enter the db-init pod and ping the db pod ClusterIP on the edge cluster. 2) Enter the db pod on the edge cluster and ping the cloud controller IP. Example commands are sketched below.
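
A minimal sketch of how the two tests could be run with kubectl exec (the pod name suffixes and IPs are placeholders; substitute the values shown by kubectl get pods/svc on your clusters):

    # test 1: from the db-init pod on the edge cluster, ping the db ClusterIP
    kubectl exec -it traffic-office1-db-init-<suffix> -- ping <db-cluster-ip>
    # test 2: from the db pod on the edge cluster, ping the cloud cluster controller IP
    kubectl exec -it traffic-office1-db-<suffix> -- ping <cloud-controller-ip>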

praj527 commented 3 years ago

Hi @Wushigang915, sorry for the delay in replying. I redeployed the complete setup but ended up with the same issue.

I tried the tests you suggested: one failed ('db-init pod to ping the db pod ClusterIP on the edge cluster') and the other passed. The results are below. Please let me know if any more details are required.

1. Enter the db-init pod and ping the db pod ClusterIP on the edge cluster.

[root@edgenode2 centos]# kubectl get svc
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                             AGE
kubernetes                        ClusterIP   10.96.0.1        <none>        443/TCP                                             14d
loungex-app-service               NodePort    10.97.196.15     <none>        8082:30082/TCP                                      2d
traffic-office1-cameras-service   ClusterIP   10.111.122.199   <none>        17000/TCP,17010/TCP,17020/TCP,17030/TCP,17040/TCP   119m
traffic-office1-db-service        ClusterIP   10.98.64.116     <none>        9200/TCP,9300/TCP                                   119m
traffic-office1-mqtt-service      ClusterIP   10.99.98.107     <none>        1883/TCP                                            119m
traffic-office1-storage-service   ClusterIP   10.106.229.76    <none>        8080/TCP                                            119m
[root@edgenode2 centos]# kubectl get pods -o wide
NAME                                                READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
traffic-office1-alert-5d694cc859-mjlww              1/1     Running   0          126m   10.245.74.244   edgenode2   <none>           <none>
traffic-office1-analytics-traffic-df8dcfc79-m4twk   1/1     Running   0          126m   10.245.74.246   edgenode2   <none>           <none>
traffic-office1-camera-discovery-68958b48bd-z2fqb   1/1     Running   0          126m   10.245.74.226   edgenode2   <none>           <none>
traffic-office1-cameras-557fc4cf9b-s6r4l            1/1     Running   0          126m   10.245.74.229   edgenode2   <none>           <none>
traffic-office1-db-5646f569fc-tvvbq                 1/1     Running   0          125m   10.245.74.223   edgenode2   <none>           <none>
traffic-office1-db-init-6d6dff5476-ps76p            1/1     Running   0          125m   10.245.74.231   edgenode2   <none>           <none>
traffic-office1-mqtt-d8cb9dd9-6l5sb                 1/1     Running   0          126m   10.245.74.221   edgenode2   <none>           <none>
traffic-office1-mqtt2db-7cbb5db4d9-qlxd5            1/1     Running   0          125m   10.245.74.209   edgenode2   <none>           <none>
traffic-office1-smart-upload-5fd656fb76-l4r9z       1/1     Running   0          125m   10.245.74.216   edgenode2   <none>           <none>
traffic-office1-storage-5b99445df-gnchs             1/1     Running   0          125m   10.245.74.239   edgenode2   <none>           <none>

Pinging the 'db' pod IP from the 'db-init' pod:

[root@traffic-office1-db-init-6d6dff5476-ps76p /]# ping 10.245.74.223
PING 10.245.74.223 (10.245.74.223) 56(84) bytes of data.
^C
--- 10.245.74.223 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6004ms

Pinging the 'db' service CLUSTER-IP from the 'db-init' pod:

[root@traffic-office1-db-init-6d6dff5476-ps76p /]# ping 10.98.64.116
PING 10.98.64.116 (10.98.64.116) 56(84) bytes of data.
^C
--- 10.98.64.116 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 4999ms

2. Enter the db pod on the edge cluster and ping the cloud controller IP.

Pinging the cloud cluster controller IP from the 'db' pod:

[elasticsearch@traffic-office1-db-5646f569fc-tvvbq home]$ ping 192.168.0.13
PING 192.168.0.13 (192.168.0.13) 56(84) bytes of data.
64 bytes from 192.168.0.13: icmp_seq=1 ttl=63 time=1.72 ms
64 bytes from 192.168.0.13: icmp_seq=2 ttl=63 time=0.842 ms
64 bytes from 192.168.0.13: icmp_seq=3 ttl=63 time=3.29 ms
64 bytes from 192.168.0.13: icmp_seq=4 ttl=63 time=1.50 ms
64 bytes from 192.168.0.13: icmp_seq=5 ttl=63 time=0.842 ms
64 bytes from 192.168.0.13: icmp_seq=6 ttl=63 time=1.04 ms
^C
--- 192.168.0.13 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5008ms
rtt min/avg/max/mdev = 0.842/1.541/3.296/0.851 ms
wushigax commented 3 years ago

Hi @praj527, sorry for the delay in replying. The db-init pod should be able to ping the db pod; pods in the default namespace should be able to communicate with each other. Can you check the network policies on the cluster with "kubectl get netpol"? If a "block-all-ingress" netpol is present on the cluster, please delete it and restart Smart City; see the sketch below.
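
A minimal sketch of that check and cleanup (the policy name is the one suggested above; list the policies first and only delete the one that blocks ingress on your cluster):

    # list the network policies in the default namespace
    kubectl get netpol
    # delete the blocking policy if it is present
    kubectl delete netpol block-all-ingress
    # then restart the Smart City pods, e.g. by redeploying the application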

praj527 commented 3 years ago

Thanks @Wushigang915, it worked for me. No "block-all-ingress" policy was present, but there were some other policies for other services; when I removed them, it worked.
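
For reference, a hypothetical way to list and remove the remaining policies in the default namespace (only appropriate if none of them are still needed by other services):

    # list all network policies in the default namespace, then delete them
    kubectl get netpol -n default
    kubectl delete netpol --all -n default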