ertis-research / opentwins

Innovative open-source platform that specializes in developing next-gen compositional digital twins
https://ertis-research.github.io/opentwins/
Apache License 2.0
151 stars 28 forks

Errors in installing as per the steps mentioned in README #2

Closed casafurix closed 11 months ago

casafurix commented 1 year ago

Hello,

I have tried installing Eclipse Ditto and Hono using the cloud2edge package, but I ran into many issues during the installation itself, using either minikube or MicroK8s.

Are there any changes to be made in the steps? Thanks

juliarobles commented 1 year ago

Hello,

I don't know what kind of issues you are seeing, but in my experience the package usually crashes when no persistent volume with enough capacity for the MongoDB pod has been deployed. MongoDB also needs a persistent volume claim pointing to that persistent volume; check that both are related and available before deploying the package. It may also be that the persistent volume is not being associated correctly with the pod because the Kubernetes labels don't match. Likewise, you can check the GitHub repository for the cloud2edge package, or try another version to see if it works.
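A minimal sketch of that PV/PVC pairing (names, size, labels, and hostPath here are hypothetical, not taken from the cloud2edge charts): the claim binds to the volume when the storageClassName and access modes are compatible, and a label selector can pin the claim to a specific labelled volume.

```yaml
# Hypothetical PV/PVC pair for the MongoDB pod; adjust names, size,
# and hostPath to your cluster. The PVC binds to the PV because the
# storageClassName matches and the selector matches the PV's labels.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
  labels:
    app: mongodb
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mongodb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  namespace: digitaltwins
spec:
  storageClassName: manual
  selector:
    matchLabels:
      app: mongodb
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```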

On the other hand, if you are having problems installing the cloud2edge package, you can try installing Eclipse Ditto and Eclipse Hono separately. It works the same as using the package (the package really just makes it easier to install both together and to establish a sample connection). You can try the default configuration first (without setting any values) to see if it works.

To install ditto: helm upgrade --install --dependency-update -n digitaltwins dt-ditto eclipse-iot/ditto --version=3.1.4 --wait --debug

To install hono: helm upgrade --install dt-hono eclipse-iot/hono -n digitaltwins --version=2.3.1

If you still have issues, please check the corresponding repository or provide more information.

Regards :)

casafurix commented 1 year ago

Hello @juliarobles, thanks for your help. The cloud2edge package has gone through some updates, and I have now managed to install it by following the updated cloud2edge documentation.

However, I am facing issues in deploying InfluxDB:

"For InfluxDB, Helm will again be used for deployment. The following sc-influxdb2.yaml and pv-influxdb2.yaml files will be required to be applied before installation. In addition, the recommended values are in the values-influxdb2.yaml file (it is recommended that you check it before installing and change the password variable to your preference)."
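As a rough idea of the shape those two files likely take (the contents below are assumed for illustration, not the repository's actual sc-influxdb2.yaml and pv-influxdb2.yaml): a StorageClass plus a PersistentVolume that references it, for the InfluxDB chart to claim.

```yaml
# Assumed shape only -- use the sc-influxdb2.yaml / pv-influxdb2.yaml
# from the repository once available. A StorageClass and a PV that
# references it; size and hostPath are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: influxdb2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb2-pv
spec:
  storageClassName: influxdb2
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/influxdb2
```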

Similarly, the files for Grafana are unavailable:

"Deploying Grafana is very similar to InfluxDB. We will have to apply the file pv-grafana.yaml and install the Helm Chart with the values of the values-grafana.yaml file (it is also recommended to modify the password variable)."

The files mentioned in these passages are no longer available. Could you please help me obtain the missing files? There are more referenced in the README of this repository. Thanks.

juliarobles commented 1 year ago

I have just added the missing files, sorry for the delay. If you have any more problems, don't hesitate to comment.

casafurix commented 1 year ago

Hello, thank you so much for adding the missing files!

However, I am facing an issue in Connecting Eclipse Hono and Eclipse Ditto:

"The first thing to do is to check the IPs and ports to use with kubectl get services -n $NS. At this point we are interested in the dt-service-device-registry-ext and dt-ditto-nginx services, which correspond to Eclipse Hono and Eclipse Ditto respectively (if you have followed these instructions and services are NodePort, you will have to use port 3XXXX).

We will then create a Hono tenant called, for example, ditto (you must override the variable HONO_TENANT if you have chosen another name).

HONO_TENANT=ditto
curl -i -X POST http://$HONO_IP:$HONO_PORT/v1/tenants/$HONO_TENANT

" I have been following these steps, and I will show you the output of each:

agnibha@leo:~/OpenTwins/files_for_manual_deploy$ kubectl get services -n $NS
NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                         AGE
c2e-adapter-amqp                  LoadBalancer   10.111.60.223    10.111.60.223    5671:30744/TCP                  8d
c2e-adapter-http                  LoadBalancer   10.102.69.201    10.102.69.201    8443:30754/TCP                  8d
c2e-adapter-mqtt                  LoadBalancer   10.96.208.159    10.96.208.159    8883:32673/TCP                  8d
c2e-ditto-gateway                 ClusterIP      10.111.4.65      <none>           8080/TCP                        8d
c2e-ditto-nginx                   NodePort       10.111.205.70    <none>           8080:31176/TCP                  8d
c2e-ditto-swaggerui               ClusterIP      10.102.90.41     <none>           8080/TCP                        8d
c2e-kafka                         ClusterIP      10.111.86.151    <none>           9092/TCP,9094/TCP               8d
c2e-kafka-0-external              LoadBalancer   10.99.253.219    10.99.253.219    9094:32094/TCP                  8d
c2e-kafka-headless                ClusterIP      None             <none>           9092/TCP,9093/TCP               8d
c2e-service-auth                  ClusterIP      10.107.17.252    <none>           5671/TCP,8088/TCP               8d
c2e-service-command-router        ClusterIP      10.100.62.61     <none>           5671/TCP                        8d
c2e-service-device-registry       ClusterIP      10.101.245.40    <none>           5671/TCP,8080/TCP,8443/TCP      8d
c2e-service-device-registry-ext   LoadBalancer   10.101.124.207   10.101.124.207   28443:30353/TCP                 8d
c2e-zookeeper                     ClusterIP      10.99.47.252     <none>           2181/TCP,2888/TCP,3888/TCP      8d
c2e-zookeeper-headless            ClusterIP      None             <none>           2181/TCP,2888/TCP,3888/TCP      8d
ditto-mongodb                     ClusterIP      10.101.181.121   <none>           27017/TCP                       8d
grafana                           NodePort       10.99.204.168    <none>           80:32023/TCP                    68m
influxdb-influxdb2                NodePort       10.99.72.243     <none>           80:30237/TCP                    70m
kafka-cluster                     LoadBalancer   10.108.24.129    10.108.24.129    9094:30635/TCP,9092:31267/TCP   5d23h
kafka-manager                     NodePort       10.101.238.227   <none>           9000:31077/TCP                  5d23h
zookeeper                         ClusterIP      10.106.14.50     <none>           2181/TCP                        5d23h

So according to this, my HONO_IP should be 10.101.124.207 (correct me if I am wrong) and my HONO_PORT should be 28443, right? But this doesn't quite match your documentation, which says "(if you have followed these instructions and services are NodePort, you will have to use port 3XXXX)", meaning the port should start with 3. That isn't happening in my case, even though I followed the NodePort installation.

Unfortunately, when I run the commands, this is my output:

agnibha@leo:~/OpenTwins/files_for_manual_deploy$ HONO_TENANT=ditto
agnibha@leo:~/OpenTwins/files_for_manual_deploy$ HONO_IP=10.101.124.207
agnibha@leo:~/OpenTwins/files_for_manual_deploy$ HONO_PORT=28443
agnibha@leo:~/OpenTwins/files_for_manual_deploy$ curl -i -X POST http://$HONO_IP:$HONO_PORT/v1/tenants/$HONO_TENANT
curl: (7) Failed to connect to 10.101.124.207 port 28443: No route to host

I am new to all of this, so am I making a mistake somewhere? Thanks a lot for your help. (A short note: I am trying to build an open-source DT system for a wind-turbine application as part of a research project.)
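For reference, a sketch of the two ways the registry address can be composed, given the service listing above (the IP and ports are copied from that listing; the NodePort route additionally needs the node's own IP, e.g. from `minikube ip`):

```shell
# LoadBalancer route: EXTERNAL-IP plus the service port (28443 here).
# On minikube this is only reachable while `minikube tunnel` is running
# in another terminal.
HONO_IP=10.101.124.207
HONO_PORT=28443

# NodePort route (alternative): node IP plus the 3XXXX port after the
# colon, i.e. 30353 for c2e-service-device-registry-ext above.
# HONO_IP=$(minikube ip)
# HONO_PORT=30353

HONO_TENANT=ditto
echo "http://$HONO_IP:$HONO_PORT/v1/tenants/$HONO_TENANT"
```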

casafurix commented 1 year ago

Actually I hadn't run the minikube tunnel command before, it has worked for me now! Thanks anyway!

casafurix commented 1 year ago

Hello! I am facing an error in CMAK and am not sure exactly how to fix it (I have installed ZooKeeper as well, but is there something in its installation that should be taken care of? Thanks in advance):

Yikes! Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [5000 ms]. Message of type [kafka.manager.model.ActorModel$KMAddCluster]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply. Try again.

Is there a fix for this?

casafurix commented 1 year ago

Deploying RabbitMQ

For its deployment we will use Helm as in most technologies and, therefore, the sc-rabbitmq.yaml, pv-rabbitmq.yaml, pvc-rabbitmq.yaml and values-rabbitmq.yaml files will be needed.

@juliarobles, these files for the RabbitMQ deployment are missing too; could you please add them as well? Thanks a lot.

juliarobles commented 1 year ago

I just added the files to deploy RabbitMQ. Sorry for taking so long; I've been very busy these last few weeks.

On the other hand, installing CMAK is not really necessary for using the platform; in fact, our idea is to replace that tool with a newer one. Any Kafka manager can do the same job for you. We are using this manager for another project and it is quite good.

If you need anything, let us know, I will try to answer you as soon as possible.

Regards :)

casafurix commented 1 year ago

Hi! Thanks for your reply @juliarobles! I will check out the new Kafka manager, thank you for suggesting it.

I think there is one more missing file from this section: "You also need to store in variables the IPs and ports of both Kafka and InfluxDB, as well as the name of the Kafka topic. These variables will be INFLUX_IP, INFLUX_PORT, KAFKA_IP, KAFKA_PORT and KAFKA_TOPIC. Once all variables are ready, Telegraf can be displayed with the values defined in the values-telegraf.yaml file."

The missing file is values-telegraf.yaml.

juliarobles commented 1 year ago

Hi, sorry for not answering sooner; we had a problem with that file and had to change several things. I have now added it and updated the README with a new command to use it. I have also modified the command that connects Eclipse Ditto with Kafka to send the events that happen in the twins.

Regards

yasharth97 commented 11 months ago

Hi @casafurix, were you able to resolve this issue? I am facing the same one. I have tried different IP address values for "cluster zookeeper host" in the CMAK UI when adding a new cluster. I also tried re-mapping, in the '/etc/hosts' file, the 'zookeeper-1' and 'zookeeper-2' hostnames to the endpoint of the 'zookeeper' pod obtained with 'kubectl get endpoints zookeeper -n $NS' (in my case, '10.244.0.25'). This finally made the kafka-manager logs show that a socket connection had been established at that address.

However, I am still receiving the same error as you in CMAK when creating the new cluster. Is there some configuration that I might be missing?

I would be grateful for any ideas or pointers on what might have worked for you.

@juliarobles, could you also share your viewpoint on this issue? I am currently trying to reproduce this amazing project separately on both Windows (10 Home) and Linux (Ubuntu 22.04), using minikube for my local laptop Kubernetes cluster (with cpus=max and memory=15gb settings) and with 'minikube tunnel' started. I also tried the new kafka-ui project you suggested in one of the previous posts, but faced similar issues when creating the cluster. It seems the addresses of the ZooKeeper hosts cannot be resolved. I can share more logs and info if required.

Thanks in advance.

yasharth97 commented 11 months ago

The issue mentioned above was solved using the "Kafka-UI" project, as per the suggestions above. I created a new deployment for kafka-ui and connected it to the configured Kafka cluster; by default, it creates the cluster defined under 'KAFKA_CLUSTERS_0_NAME' in the deployment YAML spec (referenced below).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-ui-deployment
  labels:
    app: kafka-ui
  # namespace: mstore
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-ui
  template:
    metadata:
      labels:
        app: kafka-ui
    spec:
      containers:
      - name: kafka-ui
        image: provectuslabs/kafka-ui:latest
        env:
        - name: KAFKA_CLUSTERS_0_NAME
          value: "digital_twin_local_cluster"
        - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
          value: kafka-cluster:9092
        - name: KAFKA_CLUSTERS_0_ZOOKEEPER
          value: zookeeper:2181
        imagePullPolicy: Always
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "1024Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-ui-service
spec:
  selector:
    app: kafka-ui
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31006

Then apply the file during the Kafka deployment setup stage, as mentioned in the README.

kubectl apply -f deploy-svc-kafka-ui.yaml -n $NS

This exposes a URL at which kafka-ui can be accessed in a web browser, 'http://<node-ip>:<node-port>', which will look something like "http://192.168.XX.XX:3XXXX".

The topic can then consequently be defined using the kafka-ui web dashboard.
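With the Service above fixing nodePort 31006, the dashboard address can be composed as follows (the node IP below is only a placeholder; on minikube it comes from `minikube ip`):

```shell
# Node IP is cluster-specific; 192.168.49.2 is a common minikube default
# and is only a placeholder here.
NODE_IP=192.168.49.2
NODE_PORT=31006   # nodePort fixed in kafka-ui-service above
echo "http://$NODE_IP:$NODE_PORT"
```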

juliarobles commented 11 months ago

Hi @yasharth97, I'm glad you were able to fix it, and thank you very much for sharing the solution. Right now we are working on improving the platform installation; CMAK will then be replaced by the Kafka-UI project. Hopefully we will be able to update the platform's README soon.

Regards :)