The following QuickStart examples are a good way to run an initial test of your IBM Cloud account access - for more robust environment automation, skip down to the Workshops section.
These examples assume your SSH key pair is at `~/.ssh/id_rsa{.pub}` - otherwise, modify `vars/examples.yml` to reflect the path to your key pair.

```bash
export IC_API_KEY=<YOUR_API_KEY_HERE>
export IC_REGION=<REGION_NAME_HERE>

ansible-galaxy install -r collections/requirements.yml

ansible-playbook example_list_vsi_images_and_profiles.yaml # to test access
ansible-playbook example_create_simple_vm_ssh.yaml # to test basic functions
ansible-playbook example_destroy_simple_vm_ssh.yaml # to undo test of basic functions
```
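If you don't already have an API key handy, one way to mint one is with the `ibmcloud` CLI. This is just a sketch - the key name `blue-forge-key` and the use of `jq` are assumptions, not anything this repo requires:

```bash
# log in to IBM Cloud interactively
ibmcloud login

# create an API key (example name) and save the response to a file -
# the key value is only shown once, at creation time
ibmcloud iam api-key-create blue-forge-key -d "Blue Forge automation" --file blue-forge-key.json

# export it for the playbooks above to pick up
export IC_API_KEY=$(jq -r .apikey blue-forge-key.json)
```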
See the `ws-kubernetes101` role, which shows how to deploy the DNS nodes, glue them together with an external zone hosted at AWS Route53 or DigitalOcean, and configure the other nodes to access it via split-horizon resolution.

You'll need to add the passlib package to your Ansible Tower virtualenv - do the following as root:

```bash
# source /var/lib/awx/venv/ansible/bin/activate
# umask 0022
# pip install --upgrade passlib
# deactivate
```
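To confirm passlib landed in the right virtualenv, a quick check against the same venv path:

```bash
source /var/lib/awx/venv/ansible/bin/activate
python -c "import passlib; print(passlib.__version__)"
deactivate
```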
The primary workloads Blue Forge supports are ones that deploy and configure workshop environments. These are the workshops this platform supports on IBM Cloud:
With N being the number of students, this environment will create the following:
QTY | Asset | Hostname |
---|---|---|
1 | Proctor Bastion | bastion.[GUID].[DOMAIN] |
2 | DNS Nodes | ns[NUM]-[GUID].[DOMAIN] |
1 Per N | Bastion/Container Host | student[N].[GUID].[DOMAIN] |
To deploy this workshop, copy and modify the example `extra_vars.yaml` file found in `vars/example_workshop_containers101.yml`:
```bash
ansible-galaxy install -r collections/requirements.yml

cp vars/example_workshop_containers101.yml containers101.extra_vars.yml
## Edit the containers101.extra_vars.yml file

ansible-playbook -e "@containers101.extra_vars.yml" workshop_create_containers101.yaml
ansible-playbook -e "@containers101.extra_vars.yml" workshop_destroy_containers101.yaml
```
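Once the create playbook finishes, a quick smoke test is to confirm the records resolve and the proctor bastion answers on SSH. A sketch, where `<GUID>` and `<DOMAIN>` stand in for the values in your `containers101.extra_vars.yml`, and the `root` login is an assumption:

```bash
# the proctor bastion and a student host should resolve publicly
dig +short bastion.<GUID>.<DOMAIN>
dig +short student1.<GUID>.<DOMAIN>

# the bastion should be reachable over SSH
ssh -o ConnectTimeout=5 root@bastion.<GUID>.<DOMAIN> hostname
```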
With N being the number of students, this environment will create the following:
QTY | Asset | Hostname, Additional A Records |
---|---|---|
1 | Proctor Bastion | bastion.[GUID].[DOMAIN] |
2 | DNS Nodes | ns[NUM]-[WORKSHOP_SHORTCODE]-[GUID].[DOMAIN] |
1 Per N | Load Balancer/Bastion | student[N].[GUID].[DOMAIN], *.apps.student[N].[GUID].[DOMAIN], api.student[N].[GUID].[DOMAIN] |
Y Per N | K8s Control Plane Nodes | student[N]-cp[Y].[GUID].[DOMAIN] |
Z Per N | K8s Application Nodes | student[N]-app[Z].[GUID].[DOMAIN] |
To deploy this workshop, copy and modify the example `extra_vars.yaml` file found in `vars/example_workshop_kubernetes101.yml`:
```bash
ansible-galaxy install -r collections/requirements.yml

cp vars/example_workshop_kubernetes101.yml kubernetes101.extra_vars.yml
## Edit the kubernetes101.extra_vars.yml file

ansible-playbook -e "@kubernetes101.extra_vars.yml" workshop_create_kubernetes101.yaml
ansible-playbook -e "@kubernetes101.extra_vars.yml" workshop_destroy_kubernetes101.yaml
```
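Each student's load balancer fronts both the cluster API and the apps wildcard, so both records should resolve once the environment is up. A sketch, assuming the API is published on the usual port 6443 and `<GUID>`/`<DOMAIN>` match your extra vars:

```bash
# the api record and any name under the apps wildcard should resolve
dig +short api.student1.<GUID>.<DOMAIN>
dig +short anything.apps.student1.<GUID>.<DOMAIN>

# once the cluster is deployed, the Kubernetes API should answer on the LB
curl -k https://api.student1.<GUID>.<DOMAIN>:6443/version
```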
With N being the number of students, this environment will create the following:
QTY | Asset | Hostname |
---|---|---|
1 | Proctor Bastion | bastion.[GUID].[DOMAIN] |
2 | DNS Nodes | ns[NUM]-[GUID].[DOMAIN] |
1 Per N | Ansible Tower Host | student[N]-tower.[GUID].[DOMAIN] |
X Per N | Ansible Target Node | student[N]-node[X].[GUID].[DOMAIN] |
To deploy this workshop, copy and modify the example `extra_vars.yaml` file found in `vars/example_workshop_ansible_automation.yml`:
```bash
ansible-galaxy install -r collections/requirements.yml

cp vars/example_workshop_ansible_automation.yml ansible_automation.extra_vars.yml
## Edit the ansible_automation.extra_vars.yml file

ansible-playbook -e "@ansible_automation.extra_vars.yml" workshop_create_ansible_automation.yaml
ansible-playbook -e "@ansible_automation.extra_vars.yml" workshop_destroy_ansible_automation.yaml
```
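A quick way to check each Tower host afterward is its unauthenticated ping endpoint. A sketch, with `-k` to tolerate a self-signed certificate and `student1` as an example host:

```bash
curl -k https://student1-tower.<GUID>.<DOMAIN>/api/v2/ping/
```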
If you're utilizing this repo with RHPDS Open Environments and want to deploy OpenShift, there are a number of challenges:
So with that, there are a few prerequisites to using Blue Forge to deploy OpenShift 4 to IBM Cloud:
Asset | Zone 1 | Zone 2 | Zone 3 | Hostname Format | Additional Notes |
---|---|---|---|---|---|
CIDR | 10.128.10.0/24 | 10.128.20.0/24 | 10.128.30.0/24 | | |
Proctor Bastion | 10.128.10.4 | | | bastion.{{ guid }}.{{ domain }} | |
Bootstrap Node | 10.128.10.7 | | | bootstrap.{{ guid }}.{{ domain }} | |
Load Balancer | 10.128.10.9 | | | lb.{{ guid }}.{{ domain }} | Also Pilot Light Server |
DNS | 10.128.10.10 | 10.128.20.10 | 10.128.30.10 | ns{{ index }}-{{ workshop_shortcode }}-{{ guid }}.{{ domain }} | |
RH IDM Server | 10.128.10.11 | 10.128.20.11 | 10.128.30.11 | idm{{ index }}.{{ guid }}.{{ domain }} | If enabled, BIND DNS upstream is changed to IDM servers |
Control Plane Node | 10.128.10.20 | 10.128.20.20 | 10.128.30.20 | ctrlp-{{ index }}.{{ guid }}.{{ domain }} | |
 | 10.128.10.21 | 10.128.20.21 | 10.128.30.21 | | Additional Optional Control Plane Nodes |
Infrastructure Nodes | 10.128.10.30 | 10.128.20.30 | 10.128.30.30 | infra-{{ index }}.{{ guid }}.{{ domain }} | Optional Infrastructure Nodes |
 | 10.128.10.31 | 10.128.20.31 | 10.128.30.31 | | Additional Optional Infrastructure Nodes |
Application Nodes | 10.128.10.40 | 10.128.20.40 | 10.128.30.40 | app-node-{{ index }}.{{ guid }}.{{ domain }} | |
 | 10.128.10.41 | 10.128.20.41 | 10.128.30.41 | | Additional Optional Application Nodes, in sets of 3 |
 | 10.128.10.42 | 10.128.20.42 | 10.128.30.42 | | |
 | 10.128.10.43 | 10.128.20.43 | 10.128.30.43 | | |
 | 10.128.10.44 | 10.128.20.44 | 10.128.30.44 | | |
 | 10.128.10.45 | 10.128.20.45 | 10.128.30.45 | | |
Minio S3 Server | 10.128.10.61 | | | s3.{{ guid }}.{{ domain }} | |
GitLab Server | 10.128.10.62 | | | gitlab.{{ guid }}.{{ domain }} | |
NFS Server | 10.128.10.63 | | | nfs.{{ guid }}.{{ domain }} | |
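Since the three DNS nodes sit at fixed addresses per the table, a handy sanity check from inside the VPC is to query the same record against each one. A sketch, using the bastion record from the table above, with `<GUID>`/`<DOMAIN>` standing in for `{{ guid }}`/`{{ domain }}`:

```bash
# all three DNS nodes should return the same answer
for ns in 10.128.10.10 10.128.20.10 10.128.30.10; do
  echo "== ${ns} =="
  dig +short @"${ns}" bastion.<GUID>.<DOMAIN>
done
```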
TODO: Explore OCS for storage
- Download the RHCOS QEMU QCow2 image
- Use `guestfish` to modify the GRUB boot kernel arguments and dracut to point the first-boot nameservers at `10.128.10.10`, `10.128.20.10`, and `10.128.30.10` - we'll deploy a few DIY DNS servers to map things properly

The whole set of commands looks like this:
```bash
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-qemu.x86_64.qcow2.gz
gunzip rhcos-qemu.x86_64.qcow2.gz

guestfish -a rhcos-qemu.x86_64.qcow2
><fs> launch
><fs> mount /dev/sda1 /
><fs> vi /ignition.firstboot
```

Insert the following into the `/ignition.firstboot` file:

```
set ignition_network_kcmdline='coreos.firstboot=1 rd.neednet=1 ip=dhcp nameserver=10.128.10.10 nameserver=10.128.20.10 nameserver=10.128.30.10 ignition.platform.id=metal ignition.config.url=http://10.128.10.9:8082/ignition-generator'
```

Then hit `ESC` to leave INSERT mode and `:wq` to save and exit vi. Exit guestfish with the following:

```
><fs> shutdown
><fs> exit
```
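If you'd rather not drive guestfish interactively, the same edit can be scripted by feeding commands on stdin. A sketch, assuming appending the line to `/ignition.firstboot` is the edit you want (guestfish expands `\n` inside double-quoted arguments):

```bash
guestfish -a rhcos-qemu.x86_64.qcow2 <<'GF'
launch
mount /dev/sda1 /
# append the kernel-argument line to the firstboot config
write-append /ignition.firstboot "set ignition_network_kcmdline='coreos.firstboot=1 rd.neednet=1 ip=dhcp nameserver=10.128.10.10 nameserver=10.128.20.10 nameserver=10.128.30.10 ignition.platform.id=metal ignition.config.url=http://10.128.10.9:8082/ignition-generator'\n"
shutdown
GF
```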
Now, with that modified QCow2 image, you can upload it to a Cloud Object Storage bucket in your personal IBM Cloud account. Make sure to make the image publicly available if you're accessing it from a different account, such as one provisioned by RHPDS. You'll need the `cos://` link.
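One way to do the upload is with the IBM Cloud CLI's Cloud Object Storage plugin. A sketch - `my-rhcos-bucket` is a placeholder for an existing bucket of yours:

```bash
# install the COS plugin if you don't already have it
ibmcloud plugin install cloud-object-storage

# upload the modified image to the bucket
ibmcloud cos upload --bucket my-rhcos-bucket \
  --key rhcos-qemu.x86_64.qcow2 \
  --file rhcos-qemu.x86_64.qcow2
```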
Blue Forge's OpenShift deployments are already architected around the above layout and will stage the infrastructure in the proper order.

All that you need to bring is:

- A `cos://` link to the modified RHCOS image - you may have a friend who has one they can share...
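For reference, a `cos://` link follows a region/bucket/object layout, so for the example upload above it would look something like this (region and bucket are placeholders):

```
cos://us-south/my-rhcos-bucket/rhcos-qemu.x86_64.qcow2
```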