smart-edge-open / converged-edge-experience-kits

Source code for experience kits with Ansible-based deployment.
Apache License 2.0

nfd-master node not in running state when setup is run on VMs #23

Closed: pavanats closed this issue 4 years ago

pavanats commented 4 years ago

Hi, I am new to OpenNESS and trying to create an edge controller & network edge setup using VMs. I have created another VM to run the deployment script. My problems are:

  1. Can this setup be run using CentOS 7.6 VMs?
  2. On the controller node, I can't log in from the landing_ui page. I have read somewhere that this login page is applicable only for on-premises deployment. Is this correct?
  3. On the controller node, I ran `kubectl describe pods nfd-master -n openness` and the output is: "Failed to create pod sandbox.... networkPlugin cni failed to setup pod nfd-master.... /run/openvswitch/kube-ovn-daemon.sock: connect: no such file or directory"

I am badly stuck with the above error, any help will be appreciated.
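
For reference, a rough sketch of the checks behind the output above (the pod name, namespace, and socket path come straight from the error message; run the node-local check on the affected host):

```shell
# Inspect the failing pod; the sandbox error shows up under Events
kubectl describe pod nfd-master -n openness

# The CNI error points at a socket the kube-ovn daemon should create
ls -l /run/openvswitch/kube-ovn-daemon.sock

# Check whether the kube-ovn pods are running at all
kubectl get pods --all-namespaces -o wide | grep -i ovn
```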

amr-mokhtar commented 4 years ago

> Hi, I am new to OpenNESS and trying to create an edge controller & network edge setup using VMs. I have created another VM to run the deployment script. My problems are:
>
> 1. Can this setup be run using CentOS 7.6 VMs?

OpenNESS currently supports running on bare metal only; running in VMs has not been tested by the OpenNESS team.

> 2. On the controller node, I can't log in from the landing_ui page. I have read somewhere that this login page is applicable only for on-premises deployment. Is this correct?

Correct. This is applicable only for on-premises deployments. For Network Edge (NE) deployment, you have the option to deploy the Kubernetes dashboard instead.
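
If it helps, a minimal sketch of that option (the dashboard release and the port mapping below are assumptions; pick the dashboard release that matches your Kubernetes version):

```shell
# Deploy the Kubernetes dashboard (v2.0.0 is an assumed release)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

# Forward the dashboard service locally, then browse to https://localhost:8443
kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 8443:443
```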

> 3. On the controller node, I ran `kubectl describe pods nfd-master -n openness` and the output is: "Failed to create pod sandbox.... networkPlugin cni failed to setup pod nfd-master.... /run/openvswitch/kube-ovn-daemon.sock: connect: no such file or directory"

It is not clear why this could have happened, but there is a general observation that kube-ovn is slow to deploy. If the failure persists, please report a separate issue for it and provide the following (a sketch of commands for collecting these follows the list):

  1. a printout of all the pods in the cluster
  2. logs of any failing pods
  3. the Ansible configuration files
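
A minimal sketch for collecting those (pod names are placeholders; the Ansible file paths are an assumption based on a typical experience-kit checkout):

```shell
# 1. All pods in the cluster
kubectl get pods --all-namespaces -o wide > pods.txt

# 2. Events and logs of any failing pod (name/namespace are placeholders)
kubectl describe pod nfd-master -n openness > nfd-master-describe.txt
kubectl logs nfd-master -n openness > nfd-master.log

# 3. Ansible configuration used for the deployment (paths are assumed;
#    adjust to your checkout of the experience kits)
tar czf ansible-config.tgz inventory.ini group_vars/ host_vars/
```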

> I am badly stuck with the above error, any help will be appreciated.

Please note that we can only support bare-metal installation at the moment.