zikalino / openshift-work

This is a temporary repo for experiments with OpenShift
MIT License

Migrate nodePrep and masterPrep to playbooks #17

Open brusMX opened 6 years ago

brusMX commented 6 years ago

Let's try to do it all in Ansible Playbooks

brusMX commented 6 years ago

Ok, I have migrated nodePrep in commit 6562cb68cf44db6d361fd605a52089ae63c6b542, but the problem now is that I need to be able to deploy the Ansible playbooks to the VMs, which basically means I need to either:

  1. Solve issue #11 so I can NAT and SSH into the master and so on, or
  2. Have the bastion node set up (#16) and add all the other VMs to groups so I can run the playbooks on them (see the inventory sketch below).
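
For reference, here is a minimal sketch of what option 2 could look like once the bastion exists; the group names, addresses, and file names are hypothetical:

```yaml
# hosts.yml -- hypothetical static inventory, used from the bastion node
all:
  vars:
    ansible_user: azureuser
    ansible_ssh_private_key_file: ~/.ssh/id_rsa
  children:
    masters:
      hosts:
        ocp-master-0:
          ansible_host: 10.0.0.4
    nodes:
      hosts:
        ocp-node-0:
          ansible_host: 10.0.0.5
        ocp-node-1:
          ansible_host: 10.0.0.6
```

With something like that in place, the migrated prep playbook could be targeted at a group, e.g. `ansible-playbook -i hosts.yml nodePrep.yml --limit nodes` (playbook name hypothetical).
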
zikalino commented 6 years ago

About #11, I asked @haroldwongms and he said the plan is to remove NAT on purpose for security reasons. Perhaps we should just enable NAT for the bastion node?

Alternatively, the bastion node could be in the same VNet as the other nodes, so we could just use local addresses without NAT.
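
If we go the local-address route, the other VMs could still be reached from the control machine through an SSH jump over the bastion, with no NAT rules at all. A minimal sketch, assuming an OpenSSH client that supports ProxyJump and a hypothetical bastion address:

```yaml
# group_vars/nodes.yml -- hypothetical; tunnel through the bastion to the private IPs
ansible_ssh_common_args: '-o ProxyJump=azureuser@bastion.example.com'
```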

Please check this example:

This script creates the list of hosts dynamically: https://github.com/Azure-Samples/ansible-playbooks/blob/master/vmss/get-hosts-tasks.yml

And this example uses it:

https://github.com/Azure-Samples/ansible-playbooks/blob/master/vmss/vmss-setup-deploy.yml
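
The core of that tasks file is the `add_host` pattern, which builds an in-memory group at runtime. A rough sketch of the idea (here `node_ips` is a placeholder; the actual sample derives the addresses from the Azure facts modules):

```yaml
# sketch of the get-hosts-tasks.yml pattern -- register VMs in an in-memory group
- name: Add discovered VMs to the scalesethosts group
  add_host:
    name: "{{ item }}"
    groups: scalesethosts
    ansible_user: azureuser
  with_items: "{{ node_ips }}"
```

The deploy playbook can then simply target that group with `hosts: scalesethosts`.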

zikalino commented 6 years ago

Actually, the entire VMSS sample may be useful:

https://github.com/Azure-Samples/ansible-playbooks/tree/master/vmss

Maybe as a next step we could use scale set(s) to create the nodes (a sketch of what that could look like is below).
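
For example, the node pool could come from a single scale set created by the playbook itself; a rough sketch using the `azure_rm_virtualmachinescaleset` module, where the resource group, network names, image, and sizes are all placeholders:

```yaml
# create-nodes-vmss.yml -- hypothetical sketch, not the actual create.yml
- hosts: localhost
  connection: local
  tasks:
    - name: Create a scale set for the OpenShift nodes
      azure_rm_virtualmachinescaleset:
        resource_group: openshift-rg
        name: ocp-nodes
        vm_size: Standard_DS2_v2
        capacity: 3
        virtual_network_name: ocp-vnet
        subnet_name: nodes-subnet
        upgrade_policy: Manual
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        managed_disk_type: Standard_LRS
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.4'
          version: latest
```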

zikalino commented 6 years ago

I have created issue #19 and suggested that we could have a separate script to prepare images. Then later we could just refer to these images in create.yml, so the additional preparation step and NAT wouldn't be necessary (sketch below).
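
A rough sketch of what that reference could look like in create.yml, assuming the prep script publishes a managed image (all names here are placeholders):

```yaml
# sketch only -- create a node VM from a pre-prepared managed image
- name: Create a node VM from the prepared image
  azure_rm_virtualmachine:
    resource_group: openshift-rg
    name: ocp-node-0
    vm_size: Standard_DS2_v2
    admin_username: azureuser
    ssh_password_enabled: false
    ssh_public_keys:
      - path: /home/azureuser/.ssh/authorized_keys
        key_data: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    image:
      name: ocp-node-image
      resource_group: openshift-images
```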

brusMX commented 6 years ago

The only concern I have with VMSS is that there was an issue with provisioning disks: you could not attach a disk to just a single VM; either all VMs got a disk or none did. I know the PG worked to alleviate this, but I don't know if that work has been finished. Now, maybe if we set up a GlusterFS cluster on the side, this problem would be solved and the disk provisioning would be handled by Gluster.