openshift / installer

Install an OpenShift 4.x cluster
https://try.openshift.com
Apache License 2.0

How to simplify installation of OKD 4.x? #3545

Closed magick93 closed 3 years ago

magick93 commented 4 years ago

How to simplify installation of OKD 4.x?

I'm currently planning to upgrade OpenShift Origin 3.11 to OKD 4.x, and am seeking some guidance and clarification.

Previously, with 3.11, there was no need for:

Questions

Are the load balancers required? 3.11 shipped with HAProxy, which can work as a load balancer. Also, my organization uses OpenShift internally with a total of approximately 5 users, so there isn't a need to balance load. And if we need load balancers to install OpenShift, it will simply be more work with no added value.

On https://github.com/openshift/installer/blob/master/docs/user/vsphere/install_upi.md there is an "example install config for vSphere UPI" which includes vSphere API variables. Are these needed even for a UPI installation? If yes, is it so OpenShift can manage/upgrade the hosts?

Does this mean ALL machines need internet access?

Set the vApp properties of the VM to set the Ignition config for the VM. The guestinfo.ignition.config.data property is the base64-encoded Ignition config. The guestinfo.ignition.config.data.encoding should be set to base64.

The Ignition config supplied in the vApp properties of the bootstrap VM should be an Ignition config that has a URL from which the bootstrap VM can download the bootstrap.ign created by the OpenShift Installer. Note that the URL must be accessible by the bootstrap VM.
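For reference, that pointer config is tiny. Below is a hedged sketch of building and encoding one; the URL is a placeholder, and the Ignition spec version "3.1.0" is an assumption (older 4.x releases used spec 2.x), so match both to your environment.

```python
# Sketch: build the small "pointer" Ignition config that tells the bootstrap
# VM where to fetch bootstrap.ign, then base64-encode it for the
# guestinfo.ignition.config.data vApp property.
# The URL and spec version are placeholders, not values from this thread.
import base64
import json

pointer = {
    "ignition": {
        "version": "3.1.0",
        "config": {
            "merge": [{"source": "http://10.0.0.5:8080/bootstrap.ign"}],
        },
    }
}

encoded = base64.b64encode(json.dumps(pointer).encode()).decode()
# `encoded` is the value for guestinfo.ignition.config.data;
# guestinfo.ignition.config.data.encoding must be set to "base64".
print(encoded)
```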

The example config file has lots of empty variables. Are these needed or can they be left out?

Is it possible to just copy the Ignition config created by the OpenShift Installer into the bootstrap VM, rather than:

  1. converting config A to base64
  2. adding this to the vApp properties
  3. the bootstrap machine reading from the vApp properties
  4. the bootstrap machine downloading config B

It is one Ansible task to copy the Ignition config created by the OpenShift Installer into the bootstrap VM, rather than several tasks to achieve the above. Or, why not just have an env var in the bootstrap VM that holds the URL for downloading the config? That way it's very generic, can be set over SSH, and isn't tied to a particular provider feature such as vApp properties.
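For what it's worth, step 4 only requires that bootstrap.ign be reachable over plain HTTP from the bootstrap VM; any static file server works. A self-contained sketch using Python's standard library (the directory name, file contents, and loopback address are placeholders for illustration):

```python
# Sketch: serve the installer's output directory over HTTP so the bootstrap
# VM can download bootstrap.ign. Directory, file contents, and the loopback
# address are placeholders, not values from this thread.
import functools
import pathlib
import threading
import urllib.request
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Stand-in for the directory openshift-install wrote bootstrap.ign into.
pathlib.Path("install-dir").mkdir(exist_ok=True)
pathlib.Path("install-dir/bootstrap.ign").write_text('{"ignition": {}}')

handler = functools.partial(SimpleHTTPRequestHandler, directory="install-dir")
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The bootstrap VM would fetch the config like this:
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/bootstrap.ign").read()
print(data)
server.shutdown()
```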

The vSphere example at https://github.com/openshift/installer/blob/master/upi/vsphere/README.md says "At a minimum, you need to set values for the following variables." One of the variables is ipam_token. What is this?

Step 5 says:

Ensure that you have your AWS profile set and a region specified. The installation will create AWS Route 53 resources for routing to the OpenShift cluster.

Why is an AWS profile needed? We would like to use our own internal DNS server; is that OK?

Bootstrap VM questions

Static IP Addresses and Hostname

We are currently using https://docs.ansible.com/ansible/latest/modules/vmware_guest_module.html to create VMs, and we give them a static IP and hostname in this process.

Can we continue to use this or do we need to use Terraform now?

Installer releases

On https://github.com/openshift/installer/releases/tag/v0.16.1 it says:

We are no longer cutting Git(Hub) releases for the installer. Future releases will be published to https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/

GitHub has a feature whereby interested parties can be notified when there is a new release. Out of interest, why did you move away from publishing releases on GitHub, and how can we best keep up to date on new installer releases?

From https://docs.openshift.com/container-platform/4.3/installing/installing_bare_metal/installing-bare-metal.html#csr_management_installing-bare-metal, it says:

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation.

In 3.11 and prior we did not have "automatic machine management", yet we didn't need to manually approve cluster certificate signing requests (CSRs) after installation. Are there plans to fix this?

How does one add a new worker node for a UPI cluster?

In general, my main concerns are:

abhinavdahiya commented 4 years ago

All the upi/ directories are examples of how users can use the user-provisioned workflow to install clusters.

Here are the requirements for vSphere https://docs.openshift.com/container-platform/4.4/installing/installing_vsphere/installing-vsphere.html and bare metal https://docs.openshift.com/container-platform/4.4/installing/installing_bare_metal/installing-bare-metal.html

We define the requirements and do not prescribe the tools; Terraform vs. Ansible, you can choose. How to do the LB is completely up to you: HAProxy, etc.
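For a small cluster, the HAProxy config for the control-plane endpoints can be quite short. A hedged sketch follows: the hostnames are placeholders, ports 6443 and 22623 are the standard 4.x API and machine-config endpoints, and 80/443 would be needed for ingress as well.

```
# Minimal HAProxy sketch for a small UPI cluster (hostnames are placeholders).
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend api
    bind *:6443
    default_backend masters-api

backend masters-api
    balance roundrobin
    server master0 master0.example.com:6443 check
    server master1 master1.example.com:6443 check
    server master2 master2.example.com:6443 check

frontend machine-config
    bind *:22623
    default_backend masters-mcs

backend masters-mcs
    balance roundrobin
    server master0 master0.example.com:22623 check
    server master1 master1.example.com:22623 check
    server master2 master2.example.com:22623 check
```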

For the static IP questions for vSphere, I think the upstream docs that I linked above can provide more details, or you can reach out to https://github.com/openshift/os.

For the releases on GitHub:

Red Hat already has ways to ship content to users, and we decided to use those instead of using GitHub for the same thing. That was easier for the release team.

For the CSR flow in 4.x:

As per OpenShift's security stance, we cannot automatically approve any machine to join the cluster. The user needs to approve the machine joining the cluster. The platform supports the machine-api to add nodes to the cluster, and approval is handled automatically only for them.
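As a hedged illustration of the manual UPI flow: pending CSRs show up in `oc get csr`, and each one is approved with `oc adm certificate approve <name>`. The helper and sample data below are hypothetical; the sketch just filters the JSON output down to the not-yet-approved names.

```python
# Sketch: pick out pending CSR names from `oc get csr -o json` output.
# Each returned name would then be approved with
# `oc adm certificate approve <name>`. The sample data is made up.
import json

def pending_csr_names(csr_json):
    """Return names of CSRs that have no 'Approved' condition yet."""
    names = []
    for item in json.loads(csr_json)["items"]:
        conditions = item.get("status", {}).get("conditions", [])
        if not any(c.get("type") == "Approved" for c in conditions):
            names.append(item["metadata"]["name"])
    return names

sample = json.dumps({"items": [
    {"metadata": {"name": "csr-abc12"}, "status": {}},
    {"metadata": {"name": "csr-def34"},
     "status": {"conditions": [{"type": "Approved"}]}},
]})
print(pending_csr_names(sample))  # -> ['csr-abc12']
```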

openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 3 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci-robot commented 3 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/installer/issues/3545#issuecomment-742091977):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.