Open · CommoDor64 opened this issue 1 year ago
Sorry for the late reply. This might be better suited to a discussion thread, but I will share some thoughts here. The way the deployment happens now is intentional, because of the flexibility it gives. That does not mean there are no issues that prove annoying in certain situations; as far as I am aware, most of them come from problems in the scripts themselves. Do you have your solution uploaded to your GitHub so that we can test it? How easy would it be to offer newer releases with this setup? The nginx solution is not needed when using the dms_cli tool.
I understand the flexibility that comes with just providing bash scripts and instructions. However, the cloud community is slowly converging on standard solutions.
Terraform providers exist for almost every cloud platform, and for on-premise as well. Ansible abstracts away the manual labor here.
Fair warning: this solution is quick and dirty and was only meant to let people install everything semi-automatically. It uses an Ansible playbook (included below), but I also had ideas for a full Terraform setup with machine provisioning.
P.S. I don't understand why a CLI is needed for such a standard thing. The dms_cli needs to be maintained, versioned, etc., and there are no super complex configurations at the end of the day.
- hosts: 167.2**.***.***
  vars:
    ansible_user: root
  tasks:
    - name: clone oaic repo
      git:
        repo: "https://github.com/openaicellular/oaic.git"
        dest: /home/root/oaic
    - name: recursive submodule pull
      shell: git submodule update --init --recursive --remote
      args:
        chdir: /home/root/oaic
    - name: prepare cluster installation
      shell: ./gen-cloud-init.sh
      args:
        chdir: /home/root/oaic/RIC-Deployment/tools/k8s/bin
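    # The script below (going by its name) bootstraps a single-node Kubernetes
    # 1.16 cluster with Helm 2.17 and the current Docker release; the whole RIC
    # platform ends up on that one host.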
    - name: install cluster
      shell: sudo ./k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh
      args:
        chdir: /home/root/oaic/RIC-Deployment/tools/k8s/bin
    - name: check cluster installation
      shell: sudo kubectl get pods -A
    - name: create ric-infra namespace
      shell: sudo kubectl create ns ricinfra
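    # stable/nfs-server-provisioner provides dynamic PersistentVolumes from an
    # in-cluster NFS server; --name is Helm 2 release syntax, matching the
    # Helm 2.17 installed above.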
    - name: install nfs provisioner
      shell: sudo helm install stable/nfs-server-provisioner --namespace ricinfra --name nfs-release-1
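    # The patch below marks the provisioner's "nfs" storage class as the
    # cluster default, so PersistentVolumeClaims that do not name a storage
    # class bind to it.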
    - name: patch storage class
      shell: 'sudo kubectl patch storageclass nfs -p "{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}"'
    - name: install nfs common
      shell: sudo apt install --yes nfs-common
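    # A throwaway local Docker registry on port 5001; the E2 termination image
    # built in the next tasks is pushed here so the cluster can pull it as
    # localhost:5001/ric-plt-e2:5.5.0.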
    - name: run local docker registry
      shell: sudo docker run -d -p 5001:5000 --restart=always --name ric registry:2
    - name: build e2 termination image
      shell: sudo docker build -f Dockerfile -t localhost:5001/ric-plt-e2:5.5.0 .
      args:
        chdir: /home/root/oaic/ric-plt-e2/RIC-E2-TERMINATION
    - name: push to local registry
      shell: sudo docker push localhost:5001/ric-plt-e2:5.5.0
      args:
        chdir: /home/root/oaic/ric-plt-e2/RIC-E2-TERMINATION
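    # The "modified_e2" recipe used below presumably points the E2 termination
    # image at the localhost:5001/ric-plt-e2:5.5.0 tag pushed above.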
    - name: deploy the RIC platform
      shell: sudo ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe_oran_e_release_modified_e2.yaml
      args:
        chdir: /home/root/oaic/RIC-Deployment/bin
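For anyone who wants to try it, here is a minimal run sketch. The file names (inventory.yml, ric-testbed.yml) are hypothetical; all you actually need is ansible-playbook and an inventory pointing at the target host.

# inventory.yml (hypothetical file name); replace the masked address with the
# real one used in the "hosts:" line of the playbook above
all:
  hosts:
    167.2**.***.***: {}

# Then, assuming the playbook above is saved as ric-testbed.yml:
#   ansible-playbook -i inventory.yml ric-testbed.yml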
Hey all, thanks a lot for putting effort into this testbed setup. I would like to start a discussion regarding developer experience.
What's wrong?
Lots of manual processes could be automated with proper scripts or with server configuration and provisioning systems (Ansible / Terraform). I had colleagues struggle with setting up the testbed. Despite the use of cloud methods, a "Cloud Native" mindset is not present, and the setup is much messier than it should be, imho.
Solution
What do I propose?