Hi @higginse-id, I checked this with engineering. It seems that you are using the wrong deploy script.
You should use ./deploy_onprem.sh.
The .dockerignore file exists at https://github.com/open-ness/openness-experience-kits/blob/master/roles/openness/onprem/dataplane/ovncni/common/files/.dockerignore
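For reference, the two entry points in the experience kit are roughly as follows (defaults and extra flags aside; the dataplane/CNI notes come from the variables discussed below):
./deploy_onprem.sh   # On-Premises mode - dataplane chosen via onprem_dataplane (nts or ovncni)
./deploy_ne.sh       # Network Edge mode - Kubernetes, CNI chosen via kubernetes_cnis (kubeovn by default)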
Hi, I don't follow. I was attempting a Network Edge deployment - it makes no sense to use the On-Premises playbook for a Network Edge deployment.
Furthermore, as far as I know I have been following the deployment instructions exactly.
I have been trying to deploy the simplest possible configuration, with a single controller and single edge node.
Can you please clarify?
Hi,
Please note that there are two deployment modes: OnPrem and Network Edge. Please take a look at https://github.com/open-ness/specs/blob/master/doc/architecture.md#deployment-scenarios
From our understanding, a change was made in group_vars/all.yml:
# Dataplane to be used for On-Premises mode
# Available dataplanes:
# - nts
# - ovncni
onprem_dataplane: "ovncni"
That change is wrong: those settings are only used in OnPrem mode.
If we want to change the CNI in Network Edge mode, we need to look at the kubernetes_cnis variable instead:
kubernetes_cnis:
- kubeovn
where kubeovn is enabled by default.
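A quick way to double-check the relevant settings before re-running a Network Edge deploy (run from the experience kit checkout; variable names as above):
grep -nA3 'kubernetes_cnis' group_vars/all.yml   # should list "- kubeovn" as the default Network Edge CNI
grep -n 'onprem_dataplane' group_vars/all.yml    # only consumed by the OnPrem playbooks, not deploy_ne.sh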
Could you send all your changes made in the repo?
Closing issue due to inactivity.
I have been attempting a minimal Network Edge deployment using the OpenNESS Experience Kit. I have modified it to use ovncni rather than nts, and I have also disabled any customisations for the controller and edge nodes (e.g. I don't want a real-time kernel). I just want to verify end-to-end connectivity from my core network, through an edge node, to an edge client behind it.
Edge Node deployment fails with the following error:
Error building ovs-dpdk - code: None, message: COPY failed: stat /var/lib/docker/tmp/docker-builder687389694/ovs-healthcheck.sh: no such file or directory
While attempting to debug the OVS Docker issue, I tried to build the Docker image directly/manually on the target edge node (I find the Ansible logs nearly impossible to read, let alone debug).
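The manual attempt was roughly the following (the image tag matches the component named in the error; the exact build context path on the node is an assumption on my part):
# run from the directory on the edge node that holds the OVS-DPDK Dockerfile and its .dockerignore
docker build -t ovs-dpdk .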
The manual build gave the same outcome (the same COPY failure).
Digging a little deeper, it seems probable that the .dockerignore file is misconfigured. As a crude workaround, I manually modified it and added entries for the two files causing issues (the wildcard entry masks two required files):
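(A sketch of the change; the exact wildcard line in the upstream .dockerignore may differ, but the idea is to re-include the two files the Dockerfile COPYs.)
# appended after the wildcard entry so these two files are no longer masked from the build context
printf '!ovs-healthcheck.sh\n!start-ovs-dpdk.sh\n' >> .dockerignore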
With these changes, the manual build succeeds. Then, to 'patch' the experience kit config temporarily:
1) On the OpenNESS Experience Kit host, I modified roles/kubernetes/cni/kubeovn/common/defaults/main.yml to also expect a (local) .dockerignore file.
2) I manually modified the .dockerignore as described above, so that ovs-healthcheck.sh and start-ovs-dpdk.sh are no longer excluded.
3) I (again manually) copied the modified .dockerignore file to my target edge host, at the path I chose above (/opt/openness/ehiggins/).
4) Finally, I re-ran the deploy_ne.sh script with the nodes argument (roughly as sketched below).
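Steps 3 and 4 came down to roughly the following (the edge node hostname is a placeholder; the destination path is the one I configured in step 1):
# copy the patched .dockerignore to the path the modified role expects on the edge node
scp .dockerignore edge-node:/opt/openness/ehiggins/
# then re-run the node part of the Network Edge deployment
./deploy_ne.sh nodes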
This time the script ran to completion.