fasmat opened this issue 4 years ago
Right, we need a Packer template for the cloud-init config. Part of the code:
"boot_command": [
"<enter><wait><enter><wait><f6><esc>",
"autoinstall ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/",
"<enter>"
],
One problem I have is that grub now automatically boots the live system after a few seconds, before Packer starts typing the boot_command over VNC. Maybe this can be adjusted with the boot_wait parameter, but I haven't tried yet.
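For reference, boot_wait is a top-level option of Packer's ISO builders; a hedged fragment showing where it would sit, combined with the boot_command above (the 5s value is illustrative):

```json
{
  "boot_wait": "5s",
  "boot_command": [
    "<enter><wait><enter><wait><f6><esc>",
    "autoinstall ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/",
    "<enter>"
  ]
}
```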
"iso_checksum": "36f15879bd9dfd061cd588620a164a82972663fdd148cce1f70d57d314c21b73",
"iso_url": "http://cdimage.ubuntu.com/ubuntu-legacy-server/releases/20.04/release/ubuntu-20.04-legacy-server-amd64.iso",
@JulyIghor Thanks for the link to the legacy installer. This can be used as a substitute for now, but Ubuntu has been using subiquity as the official installer since 18.04, and with 20.04 has dropped support for d-i.
On top of that, the new installer runs its own SSH server during installation, which Packer picks up on in error. We actually need to let the autoinstall finish AND reboot, which THEN runs the SSH server we want. No idea how to accomplish that so far. :/
Actually, with cloud-init we can do everything without an SSH server at all. So this Packer template is deprecated.
I have managed to use this https://github.com/hashicorp/packer/issues/9115#issuecomment-619445197 and get #cloud-config applied. But it does not work in UEFI mode. As I understand it, additional parameters need to be added to the initrd command line. I would fix it if I knew Ubuntu better. I got to the grub command line; what do I do next to get it booted?
@JulyIghor how did you manage to get to the grub command line? For me, Grub always boots Ubuntu before Packer starts typing, so unless I manually intervene, the boot command isn't sent to the VM before it is too late.
I'm unfortunately also not sure how to proceed. According to https://wiki.ubuntu.com/FoundationsTeam/AutomatedServerInstalls/QuickStart the boot_command you provided should already be sufficient to install the Ubuntu system, assuming the cloud-config contains all the information necessary for an autoinstall.
You actually just need to be quick enough to stop the ISO's grub boot loader from starting the installer and instead type the boot command. In my experiments, based on the work of @geerlingguy and @nickcharlton, a boot wait of 5s was spot on. The exact value for you may differ depending on the system performance.
I was only able to do it with UEFI disabled in the VM BIOS. And this is not what we need.
FYI - this is the work from @nickcharlton with a working cloud-init configuration template for 20.04:
In the meantime, if someone wants to take a look at this, I've written up my notes, which have a working configuration: https://nickcharlton.net/posts/automating-ubuntu-2004-installs-with-packer.html
(via https://github.com/chef/bento/issues/1281#issuecomment-619635873)
On top of that, the new installer runs its own SSH server during installation, which Packer picks up on in error. We actually need to let the autoinstall finish AND reboot, which THEN runs the SSH server we want. No idea how to accomplish that so far. :/
On this problem specifically, @rubenst2013, a potential solution is to use pause_before_connecting to stop us connecting to the installer's SSH.
In practice with my experiments, I'd been seeing random build failures because of it.
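For reference, pause_before_connecting is an option of Packer's SSH communicator; a hedged fragment showing where it would sit (values are illustrative):

```json
{
  "communicator": "ssh",
  "ssh_username": "ubuntu",
  "ssh_password": "ubuntu",
  "pause_before_connecting": "1m"
}
```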
Is the HTTP URL correct? Or should it be http://{{.HTTPIP}}:{{.HTTPPort}} ?
On Parallels Desktop it does not work; ALT+F2 does nothing, but 'E' does open an editor where I can add the command.
Ah, nice catch @JulyIghor. It should indeed be http://{{ .HTTPIP }}:{{ .HTTPPort }}/. Jekyll/the markdown parser is breaking it.
This boot command should work, but it doesn't:
"boot_command": [
"<tab><tab><tab><tab><tab><c><wait><bs><bs>",
"set gfxpayload=keep", "<enter>",
"linux /casper/vmlinuz quiet autoinstall ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/ ---", "<enter>",
"initrd /casper/initrd", "<enter>",
"boot", "<enter>"
]
It made no requests to the HTTP server, and I got the language selection dialog.
I'm wondering if the existing value for the boot options is what's causing this (unless you're entering them differently from mine): I'm appending autoinstall ... onto what's already there, so the boot command ends up being:
initrd=/casper/initrd quiet -- autoinstall ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/
@nickcharlton @JulyIghor We could schedule a workshop via discord or some other platform to figure this out and then post the results here. what do you think? 💡
https://t.me/joinchat/CO-Y3hxWngKWrsmTUsPV7Q - the chat history lost, sorry
Here is a working grub command line:
set gfxpayload=keep
linux /casper/vmlinuz "ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/" quiet autoinstall ---
initrd /casper/initrd
boot
Hi @JulyIghor, the quotes on the command line didn't work for me when I tried them out of curiosity. Though I think you are simply missing a dash. As explained here, grub uses a triple dash to separate different command line parts: https://stackoverflow.com/questions/11552950/triple-dash-on-linux-kernel-command-line-switches
This example is about the grub config. I'm talking about the grub command line. If I remove the quotes, it is just ignored. As it says on the grub welcome screen, it works bash-like, so I think the quotes are required to escape some characters.
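That matches my understanding of the grub shell: like bash, it treats an unquoted `;` as a command separator, so without quotes everything from `s=...` onward would be parsed as a separate (failing) command instead of reaching the kernel command line. A minimal illustration:

```
# Unquoted: the grub shell splits at ';', and the seed URL is lost
linux /casper/vmlinuz ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/ quiet autoinstall ---

# Quoted: the full datasource string reaches the kernel command line
linux /casper/vmlinuz "ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/" quiet autoinstall ---
```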
@nickcharlton I'm trying to get your example to work on macOS with QEMU. I see the SSH handshake timing out eventually, but when connecting over VNC I can also see the setup screen sitting there and not progressing. Also, the password "ubuntu" doesn't work for the SSH server that's started.
{
"builders": [
{
"boot_command": [
"<enter><enter><f6><esc><wait> ",
"autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
"<enter>"
],
"boot_wait": "5s",
"disk_interface": "virtio",
"format": "qcow2",
"http_directory": "http",
"iso_checksum": "sha256:caf3fd69c77c439f162e2ba6040e9c320c4ff0d69aad1340a514319a9264df9f",
"iso_url": "http://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso",
"memory": 1024,
"name": "ubuntu-2004",
"net_device": "virtio-net",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
"ssh_timeout": "20m",
"ssh_password": "ubuntu",
"ssh_username": "ubuntu",
"vm_name": "ubuntu-install",
"type": "qemu",
"headless": true
}
],
"provisioners": [
{
"inline": [
"ls /"
],
"type": "shell"
}
]
}
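As an aside, it's worth double-checking that the iso_checksum above matches the ISO actually downloaded; a quick local sanity check (the demo file is illustrative; sha256sum is assumed to be available):

```shell
# Compute a sha256 the same way Packer verifies iso_checksum
printf 'hello\n' > demo.txt
sha256sum demo.txt

# For the real ISO it would be:
#   sha256sum ubuntu-20.04-live-server-amd64.iso
```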
If you can offer up any guidance it would be appreciated.
This is based upon your example repo with the Qemu provider in place.
Right before that I see a couple of intermediate boot screens (screenshots omitted). Finally the installer starts and sits there, or it just bombs out with this error:
2020/06/24 13:03:41 packer-builder-qemu plugin: failed to unlock port lockfile: close tcp 127.0.0.1:5970: use of closed network connection
2020/06/24 13:03:41 packer-builder-qemu plugin: failed to unlock port lockfile: close tcp 127.0.0.1:3368: use of closed network connection
==> ubuntu-2004: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain
==> ubuntu-2004: Deleting output directory...
2020/06/24 13:03:41 [INFO] (telemetry) ending qemu
2020/06/24 13:03:41 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
2020/06/24 13:03:41 machine readable: ubuntu-2004,error []string{"Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain"}
==> Builds finished but no artifacts were created.
2020/06/24 13:03:41 [INFO] (telemetry) Finalizing.
Build 'ubuntu-2004' errored: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain
==> Some builds didn't complete successfully and had errors:
--> ubuntu-2004: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain
==> Builds finished but no artifacts were created.
2020/06/24 13:03:42 waiting for all plugin processes to complete...
2020/06/24 13:03:42 /usr/local/bin/packer: plugin process exited
2020/06/24 13:03:42 /usr/local/bin/packer: plugin process exited
Otherwise it just hangs with Nick's example here:
Having tried another well-documented example with QEMU on macOS, I get the same result.
Did you have any success? I am facing the same issue.
I used it with Parallels Desktop, so maybe that is the difference.
This gets a lot further with VirtualBox. I haven't seen whether it makes it all the way to the end, but it got much further along and is doing the installation. A timing issue with QEMU on Mac, perhaps?
With VirtualBox, this is how far it gets:
==> ubuntu-20.04-live-server: Connected to SSH!
==> ubuntu-20.04-live-server: Uploading VirtualBox version info (6.1.10)
==> ubuntu-20.04-live-server: Uploading VirtualBox guest additions ISO...
==> ubuntu-20.04-live-server: Provisioning with shell script: /var/folders/p7/ptxtv_pd3n12fd6mc5wjhk1h0000gn/T/packer-shell078496510
ubuntu-20.04-live-server: bin dev lib libx32 mnt root snap sys var
ubuntu-20.04-live-server: boot etc lib32 lost+found opt run srv tmp
ubuntu-20.04-live-server: cdrom home lib64 media proc sbin swap.img usr
==> ubuntu-20.04-live-server: Gracefully halting virtual machine...
ubuntu-20.04-live-server: Failed to set wall message, ignoring: Interactive authentication required.
ubuntu-20.04-live-server: Failed to power off system via logind: Interactive authentication required.
ubuntu-20.04-live-server: Failed to open initctl fifo: Permission denied
ubuntu-20.04-live-server: Failed to talk to init daemon.
==> ubuntu-20.04-live-server: Timeout while waiting for machine to shutdown.
==> ubuntu-20.04-live-server: Provisioning step had errors: Running the cleanup provisioner, if present...
==> ubuntu-20.04-live-server: Cleaning up floppy disk...
==> ubuntu-20.04-live-server: Deregistering and deleting VM...
==> ubuntu-20.04-live-server: Deleting output directory...
Build 'ubuntu-20.04-live-server' errored: Timeout while waiting for machine to shutdown.
==> Some builds didn't complete successfully and had errors:
--> ubuntu-20.04-live-server: Timeout while waiting for machine to shutdown.
==> Builds finished but no artifacts were created.
Config:
{
"builders": [
{
"boot_command": [
"<enter><enter><f6><esc><wait> ",
"autoinstall ds=nocloud-net;seedfrom=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
"<enter><wait>"
],
"boot_wait": "5s",
"format": "ovf",
"headless": true,
"http_directory": "http",
"iso_checksum": "sha256:caf3fd69c77c439f162e2ba6040e9c320c4ff0d69aad1340a514319a9264df9f",
"iso_urls": [
"iso/ubuntu-20.04-live-server-amd64.iso",
"https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso"
],
"memory": 1024,
"name": "ubuntu-20.04-live-server",
"output_directory": "output/live-server",
"shutdown_command": "shutdown -P now",
"ssh_handshake_attempts": "20",
"ssh_password": "ubuntu",
"ssh_pty": true,
"ssh_timeout": "20m",
"ssh_username": "ubuntu",
"type": "virtualbox-iso",
"guest_os_type": "Ubuntu_64"
}
],
"provisioners": [
{
"inline": [
"ls /"
],
"type": "shell"
}
]
}
With http/user-data:
#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: en
    variant: us
  network:
    network:
      version: 2
      ethernets:
        ens33:
          dhcp4: true
  storage:
    layout:
      name: lvm
  identity:
    hostname: ubuntu
    username: ubuntu
    password: $6$rounds=4096$8dkK1P/oE$2DGKKt0wLlTVJ7USY.0jN9du8FetmEr51yjPyeiR.zKE3DGFcitNL/nF1l62BLJNR87lQZixObuXYny.Mf17K1
  ssh:
    install-server: yes
  user-data:
    disable_root: false
  late-commands:
    - 'sed -i "s/dhcp4: true/&\n dhcp-identifier: mac/" /target/etc/netplan/00-installer-config.yaml'
    - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
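The netplan late-command above can be tried outside the installer; a sketch against a hypothetical stand-in for /target/etc/netplan/00-installer-config.yaml (GNU sed is assumed for the \n in the replacement, and the replacement's indentation is adjusted to the sample file):

```shell
# Hypothetical stand-in for what subiquity writes to 00-installer-config.yaml
cat > 00-installer-config.yaml <<'EOF'
network:
  ethernets:
    ens33:
      dhcp4: true
  version: 2
EOF

# The late-command from the config: add 'dhcp-identifier: mac' after 'dhcp4: true'
sed -i 's/dhcp4: true/&\n      dhcp-identifier: mac/' 00-installer-config.yaml
cat 00-installer-config.yaml
```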
@alexellis in my repository: https://github.com/fasmat/ubuntu you find a working configuration for both Virtualbox and VMWare.
The end game is to produce a small image that can be dd'd to a machine. I just tried that and got a bit further, but the raw image file is 39GB!
space-mini:live-server2 alex$ # VBoxManage clonehd ./packer-ubuntu-20.04-live-server-1593009318-disk001.vmdk image.img --format raw
space-mini:live-server2 alex$ du -h image.img
39G image.img
space-mini:live-server2 alex$
@fasmat I think that brings us back to the original issue - I don't need a box image and don't want to use Vagrant, this is to produce an image that can be flashed to the hard disk of a real server. Would your example also work for Ubuntu Server (subject to changing the ISO image URL and SHA)?
It uses the ubuntu live server iso to create the VM, the final image (after installing the ubuntu-desktop package and some other tools) is ~ 1 GB. I'm building the images on macOS.
@alexellis concerning the size of the image: it's a raw image containing the whole hard disk. If you zip this file you will see that it easily compresses to < 1 GB. If you want the file to be smaller uncompressed, then you have to set the partition size during the installation to less than 40 GB (which I believe is the default).
Yes of course: 960M image.img.tgz (via tar zcf)
@alexellis in my repository: https://github.com/fasmat/ubuntu you find a working configuration for both Virtualbox and VMWare.
Maybe your configuration works due to the headless parameter. I am using vsphere-iso. The boot_command only works right after the first installation interface (GUI) appears, so I have to press Esc myself and wait for the boot_command to type the commands. headless is not supported by the vsphere-iso builder.
So far as I understand it, the issue you're getting @alexellis is that the installer's SSH session is being connected to. The password is set for the user (ubuntu) in the user-data, but the installer's one is randomised (you can get it by watching the installer manually, but in our case here we don't want that).
Increasing the SSH handshake attempts just allows Packer to fail a lot (i.e. during the install) until the machine is booted after the install is completed. This is what's causing everyone to have such a non-deterministic install experience: 20 is just what seemed to work reliably for me. I'd try bumping it up and seeing what happens with Packer in debug mode.
The other option is to change the port the installer SSH is opened on (which I think is in this issue or one linked from it somewhere).
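A concrete version of that idea, which also appears in a config later in this thread, is an autoinstall early-command that drops inbound SSH for the duration of the install, so the only sshd Packer can ever reach is the one in the rebooted system:

```yaml
#cloud-config
autoinstall:
  version: 1
  early-commands:
    # Runs in the live installer environment; the rule does not
    # survive the reboot into the installed system.
    - iptables -A INPUT -p tcp --dport 22 -j DROP
```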
Otherwise it just hangs with Nick's example here:
This is because you are missing the window to escape out of the live boot. Try changing the "boot_wait" to 2 seconds.
Just wanted to add I'm experiencing the same issue with the live CD (though not using Vagrant, just creating a vSphere template). I've tried incrementing the boot_wait value 1s at a time and it's just not consistent with how long startup takes.
I found the following Medium post (https://medium.com/@tlhakhan/ubuntu-server-20-04-autoinstall-2e5f772b655a) that touches on this issue a bit and suggests using an iPXE boot environment instead of Packer, but this obviously isn't an option for everyone (including me).
@stevenmiller I have it working on vsphere-iso with a 2s wait.
I just tried with 2s again and it worked; however, the build ended up erroring out with a separate SSH communicator problem. Went to re-run the Packer build and it hasn't worked since; it always misses and ends up at the language selection screen of the live installer.
@stevenmiller here is what I use:
{
"CPUs": "{{ user `cpus` }}",
"RAM": "{{ user `memory` }}",
"RAM_reserve_all": true,
"boot_command": [
"<esc><esc><esc>",
"<enter><wait>",
"/casper/vmlinuz ",
"root=/dev/sr0 ",
"initrd=/casper/initrd ",
"autoinstall ds=nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/",
"<enter>"
],
"boot_wait": "2s",
"convert_to_template": true,
"disk_controller_type": "pvscsi",
"guest_os_type": "ubuntu64Guest",
"host": "{{user `esxi_host`}}",
"insecure_connection": "true",
"iso_checksum": "{{user `iso_checksum_type`}}:{{user `iso_checksum`}}",
"iso_urls": [
"{{user `iso_path`}}/{{user `iso_name`}}"
],
"network_adapters": [
{
"network": "VM Network",
"network_card": "vmxnet3"
}
],
"password": "{{user `esxi_password`}}",
"ssh_password": "jacaladmin",
"ip_settle_timeout": "5m",
"ssh_port": 22,
"ssh_username": "jacal",
"storage": [
{
"disk_size": "{{ user `disk_size` }}",
"disk_thin_provisioned": true
}
],
"type": "vsphere-iso",
"username": "{{user `esxi_username`}}",
"vcenter_server": "{{user `vcenter_server`}}",
"vm_name": "{{ user `template` }}",
"http_directory": "{{template_dir}}/http",
"ip_wait_address": "{{user `wait_address`}}",
"tools_sync_time": true
}
Packer version:
goffinf@DESKTOP-LH5LG1V:~$ packer version
Packer v1.6.0
Builder: proxmox
Looks like plenty of us are seeing a similar problem. I too just get stuck at "Waiting for SSH to become available".
The boot_command does appear to get me through to the right place (initially the boot_command wasn't quick enough and I ended up at the language selection for the manual install, as @alexellis described above, but reducing boot_wait to 2s fixed that). I added a wait20 after the first enter so I could check:
"builders": [
{
"boot_command": [
"<enter><wait20><enter><f6><esc><wait>",
"autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
"<enter>"
],
...
I see the expected sequence of installer screens during and after the wait (screenshots omitted), until SSH times out and Packer cleans up.
I think I have looked at all the links within this thread and many others, and whilst there have been a number of helpful suggestions, I haven't been able to find anything that allows SSH to connect.
I did note that if you install manually, an autoinstall user-data config is stored at /var/log/installer/autoinstall-user-data. So I tried using that, but, not unexpectedly, it didn't get around the SSH connection issue.
I have attached the Packer json and user-data that I am using below (meta-data is an empty file per the instructions).
If anyone has any further suggestions I would be most grateful.
Regards
Fraser.
host.json
{
"builders": [
{
"boot_command": [
"<enter><wait20><enter><f6><esc><wait>",
"autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/",
"<enter>"
],
"boot_wait": "{{user `boot_wait`}}",
"disks": [
{
"disk_size": "{{user `home_volume_size`}}",
"storage_pool": "local-lvm",
"storage_pool_type": "lvm-thin",
"type": "scsi",
"format": "raw"
}
],
"http_directory": "{{user `http_directory`}}",
"insecure_skip_tls_verify": true,
"iso_checksum": "{{user `iso_checksum_type`}}:{{user `iso_checksum`}}",
"iso_file": "{{user `iso_file`}}",
"memory": 2048,
"name": "ubuntu-20-04-base",
"network_adapters": [
{
"bridge": "vmbr0",
"model": "virtio"
}
],
"node": "{{user `proxmox_target_node`}}",
"password": "{{user `proxmox_server_pwd`}}",
"proxmox_url": "https://{{user `proxmox_server_hostname`}}:{{user `proxmox_server_port`}}/api2/json",
"ssh_handshake_attempts": "50",
"ssh_username": "{{user `ssh_username`}}",
"ssh_password": "{{user `ssh_password`}}",
"ssh_pty": true,
"ssh_timeout": "{{user `ssh_timeout`}}",
"type": "proxmox",
"unmount_iso": true,
"username": "{{user `proxmox_server_user`}}"
}
],
"provisioners": [
{
"execute_command": "{{ .Vars }} sudo -E -S sh '{{ .Path }}'",
"inline": [
"ls /"
],
"type": "shell"
}
],
"variables": {
"boot_wait": "2s",
"http_directory": "http",
"iso_checksum": "caf3fd69c77c439f162e2ba6040e9c320c4ff0d69aad1340a514319a9264df9f",
"iso_checksum_type": "sha256",
"iso_file": "local:iso/ubuntu-20.04-live-server-amd64.iso",
"proxmox_server_hostname": "proxmox-002",
"proxmox_server_port": "8006",
"proxmox_server_pwd": "xxxxxxxxxx",
"proxmox_server_user": "root@pam",
"proxmox_target_node": "home",
"ssh_handshake_attempts": "20",
"ssh_password": "ubuntu",
"ssh_username": "ubuntu",
"ssh_timeout": "10m"
}
}
user-data:
#cloud-config
autoinstall:
  identity:
    hostname: ubuntu-20-04-base
    password: "$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0"
    #password: '$6$wdAcoXrU039hKYPd$508Qvbe7ObUnxoj15DRCkzC3qO7edjH0VV7BPNRDYK4QR8ofJaEEF2heacn0QgD.f8pO8SNp83XNdWG6tocBM1'
    username: ubuntu
  keyboard:
    layout: en
    variant: 'gb'
  late-commands:
    - sed -i 's/^#*\(send dhcp-client-identifier\).*$/\1 = hardware;/' /target/etc/dhcp/dhclient.conf
    - 'sed -i "s/dhcp4: true/&\n dhcp-identifier: mac/" /target/etc/netplan/00-installer-config.yaml'
  locale: en_GB
  network:
    network:
      version: 2
      ethernets:
        ens18:
          dhcp4: true
          dhcp-identifier: mac
  ssh:
    allow-pw: true
    authorized-keys:
      - "ssh-rsa AAAAB3N......"
    install-server: true
  version: 1
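As a builder-independent sanity check, the nocloud-net seed directory can be assembled and served locally; a sketch (the directory name http matches the http_directory above, and python3 -m http.server stands in for Packer's built-in HTTP server):

```shell
# Minimal nocloud-net seed: a user-data file plus an empty meta-data file
mkdir -p http
printf '#cloud-config\n' > http/user-data   # the real file holds the full autoinstall config
: > http/meta-data                          # intentionally empty, per the quick-start docs

# Serve it and fetch what the installer would see:
#   (cd http && python3 -m http.server 8000) &
#   curl -s http://127.0.0.1:8000/user-data
ls http
```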
Also experiencing this bailing out at "Waiting for SSH to become available..." in the vsphere-iso builder. Tried to tweak "boot_wait": "2s" as suggested above by @jhawk28, varying it between 2 and 5s with fingers crossed. Also adjusted my Packer JSON to match the one shared in https://github.com/hashicorp/packer/issues/9115#issuecomment-653175050, but no luck so far.
Just to share with everyone what has been working for me.
ubuntu2004_x64.json:
{
"builders": [
{
"CPUS": "{{ user `CPUS` }}",
"RAM": "{{ user `RAM` }}",
"boot_command": [
"<esc><esc><esc>",
"<enter><wait>",
"/casper/vmlinuz ",
"root=/dev/sr0 ",
"initrd=/casper/initrd ",
"autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ubuntu/",
"<enter>"
],
"boot_order": "disk,cdrom",
"boot_wait": "5s",
"cluster": "{{ user `cluster` }}",
"communicator": "ssh",
"convert_to_template": true,
"datacenter": "{{ user `datacenter` }}",
"datastore": "{{ user `datastore` }}",
"disk_controller_type": "pvscsi",
"folder": "{{ user `folder` }}",
"guest_os_type": "{{ user `guest_os_type` }}",
"http_directory": "{{ template_dir }}/../http",
"insecure_connection": "{{ user `insecure_connection` }}",
"ip_settle_timeout": "5m",
"iso_checksum": "{{ user `iso_checksum` }}",
"iso_urls": ["{{ user `iso_url` }}"],
"network_adapters": [
{
"network": "{{ user `network` }}",
"network_card": "vmxnet3"
}
],
"password": "{{ user `password` }}",
"resource_pool": "",
"shutdown_command": "{{ user `shutdown_command` }}",
"ssh_handshake_attempts": "20",
"ssh_password": "{{ user `ssh_password` }}",
"ssh_username": "{{ user `ssh_username` }}",
"storage": [
{
"disk_size": "{{ user `disk_size` }}",
"disk_thin_provisioned": true
}
],
"type": "vsphere-iso",
"username": "{{ user `username` }}",
"vcenter_server": "{{ user `vcenter_server` }}",
"vm_name": "{{ user `vm_name` }}-{{ timestamp }}",
"vm_version": "{{ user `vm_version` }}"
}
],
"post-processors": [
{
"output": "{{ template_dir }}/packer-manifest.json",
"strip_path": true,
"type": "manifest"
}
],
"provisioners": [
{
"scripts": [
"{{ template_dir }}/../scripts/base.sh",
"{{ template_dir }}/../scripts/vmware.sh",
"{{ template_dir }}/../scripts/cleanup.sh",
"{{ template_dir }}/../scripts/zerodisk.sh"
],
"type": "shell"
}
],
"variables": {
"CPUS": "1",
"RAM": "1024",
"cluster": "",
"datacenter": "",
"datastore": "",
"disk_size": "8192",
"folder": "",
"guest_os_type": "ubuntu64Guest",
"insecure_connection": "true",
"iso_checksum": "caf3fd69c77c439f162e2ba6040e9c320c4ff0d69aad1340a514319a9264df9f",
"iso_url": "http://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso",
"network": "",
"password": "",
"shutdown_command": "sudo /sbin/halt -p",
"ssh_password": "",
"ssh_username": "ubuntu",
"username": "",
"vcenter_server": "",
"vm_name": "ubuntu2004_x64",
"vm_version": "10"
}
}
user-data:
#cloud-config
autoinstall:
  version: 1
  early-commands:
    # Block inbound SSH to stop Packer trying to connect during initial install
    - iptables -A INPUT -p tcp --dport 22 -j DROP
  identity:
    hostname: ubuntu-server
    password: "$6$3yklPgGbsS$yqLzE7Oag1Bk97a/tpAnr5BpgysH.6lpSoROGhyrlbGkHKmZ/hwWZytPXhClUCXFH2w61zC0Poot48bMXjDJF1" # generate with mkpasswd -m sha-512
    username: ubuntu
  late-commands:
    - sed -i 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g' /etc/sudoers
    - sed -i 's/^#*\(send dhcp-client-identifier\).*$/\1 = hardware;/' /target/etc/dhcp/dhclient.conf
    - "echo 'Defaults:ubuntu !requiretty' > /target/etc/sudoers.d/ubuntu"
    - "echo 'ubuntu ALL=(ALL) NOPASSWD: ALL' >> /target/etc/sudoers.d/ubuntu"
    - "chmod 440 /target/etc/sudoers.d/ubuntu"
    - 'sed -i "s/dhcp4: true/&\n dhcp-identifier: mac/" /target/etc/netplan/00-installer-config.yaml'
  packages:
    - bc
    - curl
    - lsb-release
    - ntp
    - open-vm-tools
    - openssh-server
    - wget
  ssh:
    # For now we install openssh-server during package installs
    allow-pw: true
    install-server: false
  storage:
    layout:
      name: direct
    config:
      - type: disk
        id: disk0
        match:
          size: largest
      - type: partition
        id: boot-partition
        device: disk0
        size: 1024M
      - type: partition
        id: root-partition
        device: disk0
        size: -1
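The identity password above is a SHA-512 crypt hash; as the inline comment says, it can be generated with mkpasswd -m sha-512, and if mkpasswd isn't installed, openssl passwd -6 (OpenSSL 1.1.1+) produces the same format. A sketch with a fixed salt so the output is reproducible (omit -salt to get a random one):

```shell
# Generate a SHA-512 crypt ($6$...) hash for the autoinstall identity section
openssl passwd -6 -salt examplesalt ubuntu
```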
I've had the most success with the 2s boot_wait time, but it's still a crapshoot whether it makes the window or not. I had it work once last week and it hasn't worked for me since: it misses its window to enter the boot_command and ends up at the language selection screen of the installer.
I'm running into the same issue that @alexellis was running into where it's giving me the language selection screen. I can, however, see packer enter the boot_commands so I don't think I'm missing the window. Here are my boot commands:
"boot_command": [
"<esc><wait><esc><wait><f6><wait><esc><wait>",
"autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ",
"<enter>"
],
I've manually tried to reproduce these steps but I keep getting the Welcome/Language selection screen.
My boot_wait is set to 2s.
I'm using the proxmox builder.
Any suggestions?
Feature Description
Ubuntu has switched its installer from the debian installer to subiquity. Starting with 20.04 no more alternate images will be provided: https://discourse.ubuntu.com/t/server-installer-plans-for-20-04-lts/13631
This breaks unattended Packer builds (vmware-iso, virtualbox-iso, hyperv-iso) for Ubuntu. Starting with 20.04, there no longer seem to be alternate images available, which until recently could be used to build Vagrant boxes with Packer.
Use Case(s)
If unattended installations for Ubuntu are to be supported in the future, subiquity has to be used instead of d-i.