kenmoini / ocp4-ai-svc-universal

Ansible automated multi-platform deployment of OpenShift 4 via the Assisted Installer Service
MIT License

OpenShift Assisted Installer Service, Universal Deployer

This set of resources handles an idempotent way to deploy OpenShift via the Assisted Installer Service to any number of infrastructure platforms.


Features

Supported Infrastructure Platforms

Upcoming Infrastructure Platforms


Operations

The Playbooks perform the following tasks:

bootstrap.yaml

destroy.yaml

extras-create-sushy-bmh.yaml

This extra Playbook creates VMs on a Libvirt or VMware infrastructure provider that are left powered off so they can act as virtual Bare Metal Hosts via sushy-tools.

ansible-playbook -e "@credentials-infrastructure.yaml" \
  --skip-tags=infra_libvirt_boot_vm,vmware_boot_vm,infra_libvirt_per_provider_setup,vmware_upload_iso \
  extras-create-sushy-bmh.yaml

Prerequisites

Red Hat Registry Pull Secret

To deploy OpenShift in a connected environment, you need to provide a registry pull secret. Get yours from here: https://cloud.redhat.com/openshift/install/pull-secret

It's suggested to use the Copy-to-Clipboard button to copy the registry pull secret, then paste it into a file somewhere like ~/ocp-pull-secret. Make sure there is no extra whitespace in the JSON structure.
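Since a stray newline or space inside the JSON can break the deployment, it can be worth validating the file before use. A minimal sketch, where the /tmp path and the fake auths entry are illustrative only (point the check at your real ~/ocp-pull-secret instead):

```shell
## Write a fake, minimal pull-secret-shaped JSON purely to demonstrate validation
echo '{"auths":{"registry.example.com":{"auth":"c2VjcmV0"}}}' > /tmp/ocp-pull-secret

## python3 -m json.tool exits non-zero on malformed JSON, so this catches damage
if python3 -m json.tool /tmp/ocp-pull-secret > /dev/null; then
  echo "pull secret parses as valid JSON"
else
  echo "pull secret is NOT valid JSON" >&2
fi
```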

Red Hat API Offline Token

To use the Red Hat hosted Assisted Install Service, you need to provide a Red Hat API Offline Token. Get yours from here: https://access.redhat.com/management/api

Take the token and store it in a file somewhere like ~/rh-api-offline-token.
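Because the offline token grants access to your Red Hat account's APIs, it's worth restricting the file to your user. A small sketch, assuming the same path as above (the placeholder string is not a real token):

```shell
## Create the file empty with owner-only permissions before writing the token
install -m 600 /dev/null "$HOME/rh-api-offline-token"

## Paste your real token in place of the placeholder
echo "PASTE-YOUR-OFFLINE-TOKEN-HERE" > "$HOME/rh-api-offline-token"

## Confirm the mode is 600 (owner read/write only)
ls -l "$HOME/rh-api-offline-token"
```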

One-time | Installing oc

A few Ansible Tasks use the command module to execute commands that are best serviced by the oc binary, so oc needs to be available on the system PATH.

## Create a binary directory if needed
sudo mkdir -p /usr/local/bin
echo 'export PATH="/usr/local/bin:$PATH"' | sudo tee /etc/profile.d/usrlibbin.sh
sudo chmod a+x /etc/profile.d/usrlibbin.sh
source /etc/profile.d/usrlibbin.sh

## Download the latest oc binary
mkdir -p /tmp/bindl
cd /tmp/bindl
wget https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz
tar zxvf openshift-client-linux.tar.gz

## Set permissions and move it to the bin dir
sudo chmod a+x kubectl
sudo chmod a+x oc

sudo mv kubectl /usr/local/bin
sudo mv oc /usr/local/bin

## Clean up
cd -
rm -rf /tmp/bindl
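After installing, a quick guarded check confirms the binary actually resolves on the PATH. This deliberately doesn't fail hard, in case oc isn't installed yet:

```shell
## Print the client version if oc resolves on PATH, otherwise say so
if command -v oc > /dev/null 2>&1; then
  oc version --client
else
  echo "oc not found on PATH - re-check the steps above"
fi
```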

Usage

Clone the repo

git clone https://github.com/kenmoini/ocp4-ai-svc-universal.git
cd ocp4-ai-svc-universal

One-time | Installing Needed Pip Packages

Before running this Ansible content, you will need to install the kubernetes and openshift pip packages, among others. You can do so in one shot by running the following command:

python3 -m pip install --upgrade -r requirements.txt
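If you'd rather not install these packages system-wide, the requirements can go into a virtual environment first. This step is optional, and the path below is just an example:

```shell
## Create and activate a throwaway virtual environment (example path)
python3 -m venv /tmp/ocp4-ai-venv
. /tmp/ocp4-ai-venv/bin/activate

## pip now resolves inside the venv; run the requirements install here instead
python3 -m pip --version
```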

One-time | Installing Ansible Collections

To run these Playbooks you'll need the required Ansible Collections installed. You can install them easily by running the following command:

ansible-galaxy collection install -r collections/requirements.yml

Note: If you're planning on using the Nutanix infrastructure deployment options, you'll also need to manually run ansible-galaxy collection install nutanix.ncp to install the Nutanix collection, due to how it hard-codes pip modules and the conflicts that causes during the generation of an Execution Environment.

Modify the Variables files

## List the available example cluster configuration files
ls example_vars/cluster-config-*

## Copy the one matching your platform, then edit it for your environment
cp example_vars/cluster-config-selected.yaml CLUSTER_NAME.cluster-config.yaml

Running the Playbook

With the needed variables altered, you can run the Playbook with the following command:

ansible-playbook -e "@CLUSTER_NAME.cluster-config.yaml" -e "@credentials-infrastructure.yaml" bootstrap.yaml
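Before a full run, ansible-playbook's built-in --syntax-check flag can catch YAML and templating mistakes cheaply. A guarded sketch, assuming the variables files from the steps above exist:

```shell
## Dry syntax validation only - no tasks are executed
if command -v ansible-playbook > /dev/null 2>&1; then
  ansible-playbook --syntax-check \
    -e "@CLUSTER_NAME.cluster-config.yaml" \
    -e "@credentials-infrastructure.yaml" \
    bootstrap.yaml || echo "syntax check failed - fix before a real run"
else
  echo "ansible-playbook not found on PATH"
fi
```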

Destroying the Cluster

If you are done with the cluster, or some error occurred, you can quickly delete it from your infrastructure environments, the Assisted Installer Service, and the local assets that were generated during creation:

ansible-playbook -e "@CLUSTER_NAME.cluster-config.yaml" -e "@credentials-infrastructure.yaml" destroy.yaml