equinix / terraform-provider-equinix

Terraform Equinix provider
https://deploy.equinix.com/labs/terraform-provider-equinix/

docs: VMWare guide #181


displague commented 3 years ago

The metal_device.custom_data documentation currently describes only the generic behavior of the field. https://registry.terraform.io/providers/equinix/metal/latest/docs/resources/device#custom_data

When used with the VMware ESXi operating_system choices, the custom_data field unlocks additional provisioning features if it is structured properly:

resource "metal_device" "vmware" {
  # ...
  custom_data = <<EOS
{
  "sshd": { "enabled": true, "pwauth": true },
  "rootpwcrypt": "..crypted passwd..",
  "esxishell": { "enabled": true },
  "kickstart": {
    "firstboot_url": "https:…",
    "firstboot_shell": "",
    "firstboot_shell_cmd": "",
    "postinstall_url": "https:…",
    "postinstall_shell": "",
    "postinstall_shell_cmd": ""
  }
}
EOS
}
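For reference, a fuller version of that resource might look like the sketch below. The project ID variable and the plan, metro, and operating_system slugs are illustrative placeholders, not values confirmed by this issue; check the Equinix Metal API for the ESXi versions available to your project.

resource "metal_device" "vmware" {
  project_id       = var.project_id    # hypothetical variable
  hostname         = "esxi-01"         # illustrative hostname
  plan             = "c3.medium.x86"   # illustrative plan slug
  metro            = "da"              # illustrative metro
  operating_system = "vmware_esxi_7_0" # illustrative ESXi slug
  billing_cycle    = "hourly"

  # custom_data heredoc as shown above
}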

We should update the custom_data field documentation to describe this special use case by linking to a VMware guide for the Equinix Metal (EM) Terraform provider. This guide would serve a similar function to equinix/terraform-provider-equinix#189.

It would be helpful to describe how VMware nodes can be provisioned without leaving them open to the public internet (through the use of VLANs, gateways, and Network Edge or Metal routers).
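As a minimal sketch of the VLAN half of that setup, assuming a hybrid-mode port named eth1 and a hypothetical var.project_id (the gateway and router pieces are left out here):

resource "metal_vlan" "esx_private" {
  project_id  = var.project_id # hypothetical variable
  metro       = "da"           # illustrative metro
  description = "private ESXi provisioning VLAN"
}

# Attach the VLAN to the device's second interface so provisioning
# traffic can stay off the public network.
resource "metal_port_vlan_attachment" "esx" {
  device_id = metal_device.vmware.id
  port_name = "eth1"
  vlan_vnid = metal_vlan.esx_private.vxlan
}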

The operating environment and scope should be described so users know how and where their provisioning scripts will run, and what tools and services they can take advantage of.

It should be possible, for example, to configure the networking of the node without using remote API calls (which we do here: https://github.com/equinix/terraform-metal-vsphere/blob/main/templates/esx_host_networking.py).

This guide would also be a good place to reference the existing modules, such as https://github.com/equinix/terraform-metal-vsphere (and the Tanzu module).
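For example, the vSphere module can be consumed straight from GitHub; its input variables are defined by the module itself, so they are omitted in this sketch:

module "esxi_cluster" {
  source = "github.com/equinix/terraform-metal-vsphere"

  # See the module's README for the required input variables.
}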

Any special VMware OS variants offered by EM can also be discussed in this guide.

displague commented 3 years ago

> Any special VMware OS variants offered by EM can also be discussed in this guide.

The VCF variant should be explored in this guide (#124). Ideally, this configuration would take advantage of metal_port resources (#116) to attach numerous VLANs between a set of VCF nodes. custom_data would be used to configure the networking between devices.
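A rough sketch of that shape, assuming metal_port lands roughly as proposed in #116 and a hypothetical var.project_id (the VLAN count and names are illustrative):

# One VLAN each for e.g. management, vMotion, and vSAN traffic.
resource "metal_vlan" "vcf" {
  count      = 3
  project_id = var.project_id # hypothetical variable
  metro      = "da"           # illustrative metro
}

# Disbond eth1 and attach all of the VLANs to it.
resource "metal_port" "esx_eth1" {
  port_id  = [for p in metal_device.vmware.ports : p.id if p.name == "eth1"][0]
  bonded   = false
  vlan_ids = metal_vlan.vcf[*].id
}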

displague commented 3 years ago

https://github.com/tinkerbell/boots/blob/master/installers/vmware/kickstart-esxi.go#L232-L342

The contents of the _cmd field are written to /tmp/customize-pi-cmd.sh, made executable, and run with an interpreter.
The _shell fields define which interpreter to use, defaulting to /bin/sh -C.

The file at the _url is fetched to /tmp/ks-postinstall-sup.sh, made executable, and run.

(First boot files are different, but the process is the same: /tmp/ks-firstboot-sup.sh and /tmp/fbshell.sh.)
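In HCL terms, this means the interpreter can be swapped per phase. A minimal sketch, assuming a Python interpreter is actually present at /bin/python in the ESXi image (an assumption, not something confirmed by the installer source above):

custom_data = <<EOS
{
  "kickstart": {
    "postinstall_shell": "/bin/python",
    "postinstall_shell_cmd": "print('post-install ran')"
  }
}
EOS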

displague commented 3 years ago

Alternatively, it may be easier to jsonencode HCL structures, which allows for easier interpolation and for consuming local files via the file() function.

custom_data = jsonencode({
  sshd = {
    enabled = true
    pwauth  = true
  }
  rootpwcrypt = "..crypted passwd.."
  esxishell = {
    enabled = true
  }
  kickstart = {
    // firstboot_url = "https:…"
    firstboot_shell     = "/bin/sh -C"
    firstboot_shell_cmd = file(join("/", [path.module, "assets/firstboot.sh"]))
    // postinstall_url = "https:…"
    postinstall_shell     = "/bin/sh -C"
    postinstall_shell_cmd = file(join("/", [path.module, "assets/postinstall.sh"]))
  }
})