Closed — @aravind254 closed this issue 1 year ago.
So, it sounds like you are looking for a description something like Eric did here: https://github.com/nephio-project/one-summit-22-workshop/blob/main/nephio-workshop.svg
@henderiw I think we want, for R1, just the e2e environment to be a single VM with:
We may want more workload clusters eventually but this would be a good start. A few other requirements:
I think there is more to the networking setup than that (for example VLANs), @henderiw can you please refine this/make it more accurate?
/assign @henderiw
A few more questions about this:
If you look at the Ansible environment, all of this is configurable. So I was thinking that for mgmt tests we only want the mgmt cluster, and for true E2E tests we create 4 clusters with the latest k8s version at this time.
So the Ansible env. has the flexibility to handle these different cases with some config input.
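As a sketch of that config input, the same inventory could toggle between the two modes with a couple of variables. The variable names below are hypothetical, purely illustrative of the idea, not the actual Ansible variables:

```yaml
# Hypothetical Ansible group vars -- names are illustrative only.
all:
  vars:
    # mgmt-only run for management-cluster tests:
    deploy_workload_clusters: false
    # Full E2E run would instead set, e.g.:
    # deploy_workload_clusters: true
    # workload_cluster_count: 4     # mgmt + 4 workload clusters
    # kubernetes_version: latest    # latest k8s release at the time
```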
Hey @henderiw, I was thinking of provisioning them with a multicluster KinD tool, which can use a configuration file like this one. I can cover that at the next meeting.
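For illustration, a plain per-cluster KinD config looks like the sketch below; the cluster name `edge-1` is hypothetical, and the actual multicluster tool's config file is the one linked in the comment above:

```yaml
# Plain KinD cluster config; one file like this per cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: edge-1          # illustrative name
nodes:
  - role: control-plane
  - role: worker
```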
My understanding is that R1 will do an E2E call. As such, we need to provide interconnectivity between all clusters on dedicated networks.
I named the networks as follows.
The region cluster deploys 1 SMF and 1 AMF, and the 2 workload clusters deploy the UPFs.
Here are the parameters that will deploy an E2E testbed:

```yaml
all:
  vars:
    cloud_user:
```
I have created the following diagram to visualize the Test bed setup:
So the current HW requirements for the K8s clusters are:

| Resource | Request | Limit |
|---|---|---|
| CPU | 6.6 | 13.7 |
| Memory (GB) | 0.94 | 2.37 |

Recommendation: 8 vCPUs, 6 GB
How the Nephio workload tentatively looks:

| Name | Latest release |
|---|---|
| containerd | 1.7.0 |
| CNI | 1.1.2 |
| Multus CNI | 3.9.3 |

According to the k8s-conformance program, most of the Kubernetes distributions have been certified with version 1.24.12.
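If we want the KinD clusters to track that certified version, the node image can be pinned per cluster. A minimal sketch, assuming a `kindest/node` image is published for that exact patch release (tag availability depends on the KinD release in use):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Pin the node image to the conformance-certified version;
    # verify the tag exists for your KinD release before relying on it.
    image: kindest/node:v1.24.12
```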
In our automation call today we discussed evolving our test infrastructure iteratively so that it satisfies the following scenarios, in this order:
Please add/correct if I missed anything here. We will create these as individual issues and work on them iteratively.
The CRDs in R1 will not allow for eth1 and eth2. ClusterContext has 1 master interface. We use VLANs to distinguish the networks in R1 rather than a dedicated NIC, which is also more representative of the real world.
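As a sketch of what one VLAN-per-network attachment could look like with Multus (the attachment name, VLAN subinterface, and IPAM below are hypothetical illustrations, not the R1 CRDs):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n3-net              # hypothetical name for the N3 network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0.100",
      "mode": "bridge",
      "ipam": { "type": "static" }
    }
```

Here `eth0.100` assumes a VLAN 100 subinterface has already been created on the single master interface, so each network is distinguished by VLAN tag rather than by a dedicated NIC.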
Is this complete? Can we close it?
I was planning to update the diagram to reflect the latest comments from @henderiw. Regarding the SW and HW requirements, I think we're okay.
The diagram and topology need to be enhanced to cover all free5GC components and the end-to-end call scenario with UERANSIM.
The figure has to be updated: the UERANSIM functions will run within the VM context (assuming R1 is contained within a single-VM setup), and these functions will generally be part of one of the edge clusters, as shown in this figure. But with the other UPF in the edge-2 cluster, the N3 interface would then run between the clusters, or we could have another UERANSIM as part of edge-2 as well. The decision is whether we go with a 1- or 2-UE/gNodeB setup, or we could have one edge cluster host both UPFs, served by one or two gNodeBs with two DNNs.
This is the source of this diagram : - https://github.com/Orange-OpenSource/towards5gs-helm/blob/main/docs/demo/Setup-free5gc-on-multiple-clusters-and-test-with-UERANSIM.md
Just following up here. Have we captured all of the above somewhere in a repo? Once that is done I think this can be closed.
I was trying to capture the requirements in this doc. I'll update it with the last comments.
Ok - we should move it to GH, I think.
@electrocucaracha thanks. Looks like we need access to that doc.
@aravind254 can you give access to @gvbalaji and @johnbelamaric?
I have updated the diagram, but I'm not sure whether the UE has to run as a container outside any cluster.
BTW, free5GC has some kernel restrictions (5.0.0-23-generic or 5.4.x).
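That kernel constraint could be checked up front in the provisioning scripts. A minimal sketch, matching only the two versions named here (the `check_kernel` helper is mine, not part of any Nephio or free5GC tooling):

```shell
#!/bin/sh
# Report whether a kernel release is one free5GC is noted to work with
# in this thread: 5.0.0-23-generic or any 5.4.x kernel.
check_kernel() {
  case "$1" in
    5.0.0-23-generic|5.4.*) echo "supported" ;;
    *)                      echo "unsupported" ;;
  esac
}

# Check the host we are provisioning on.
check_kernel "$(uname -r)"
```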
Looking at the diagram again, it looks OK; we might be complicating it with Xn. I was also checking whether UERANSIM supports Xn, and it looks like it doesn't currently. Also, if we include Xn, it should be a different overlay, which technically may not be correct either. We should probably just center the "N2 (bridge net)" label in the diagram; since it was to the left, it looked like Xn was intended there.
As I understand it, the requirements are now understood and documented in a gdoc. Eventually that should move to GH as we implement the test bed, but that activity would fall under that implementation phase in SIG Release. So I am closing this ticket.
SIG2 needs to clarify the requirements for the E2E test bed:

1) How many clusters, and what type of cluster (for example KinD), need to be created?
2) What networking setup is expected?
3) What workloads need to be set up in these clusters?