pre_tasks:
  - name: format the extra drive
    filesystem:
      dev: /dev/xvdb
      fstype: ext4
  - name: mount the extra drive
    mount:
      name: /secondary
      # ubuntu renames the block devices to the xv* prefix
      src: /dev/xvdb
      fstype: ext4
      state: mounted
How to swap block storage with an auto-scaling group without downtime?
The suggested approach looks to be using aws_volume_attachment in Terraform to reattach the persistent volume to the instance built from the new AMI, then using an inline provisioner to perform the mount - https://github.com/hashicorp/terraform/pull/2050#issuecomment-125229024. :thinking_face: It is hard to see how that works without downtime, since the volume has to be detached from the instance currently serving traffic before it can be attached and mounted on the new one...
Via Terraform inline provisioner
https://www.terraform.io/docs/provisioners/remote-exec.html
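A minimal sketch of that approach in modern Terraform syntax. The resource names, the variable `persistent_volume_id`, the `aws_instance.app` reference, and the connection details are all assumptions for illustration:

```hcl
# Sketch only: reattach an existing EBS volume to the instance built
# from the new AMI, then mount it with a remote-exec provisioner.
resource "aws_volume_attachment" "secondary" {
  device_name = "/dev/sdf" # ubuntu exposes this as /dev/xvdf
  volume_id   = var.persistent_volume_id
  instance_id = aws_instance.app.id
}

resource "null_resource" "mount_secondary" {
  depends_on = [aws_volume_attachment.secondary]

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /secondary",
      "sudo mount /dev/xvdf /secondary",
    ]

    connection {
      type = "ssh"
      host = aws_instance.app.public_ip
      user = "ubuntu"
    }
  }
}
```

Note this still has the detach/attach gap described above: Terraform must destroy the old attachment before it can create the new one.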
Via Python?
https://github.com/dizzythinks/asg_persistence/blob/master/attach_volume.py
https://cloudinit.readthedocs.io/
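A sketch of what such a script might do with boto3, not taken from the repo above. `swap_volume`, `wait_until`, and all IDs are illustrative names; it assumes boto3 is installed and credentials are configured:

```python
"""Sketch: move an EBS volume from one instance to another with boto3."""
import time


def wait_until(predicate, timeout=300.0, interval=5.0):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


def swap_volume(volume_id, old_instance_id, new_instance_id, device="/dev/sdf"):
    """Detach the volume from the old instance, then attach it to the new one.

    The downtime window is visible here: the volume is unusable between
    the detach completing and the new instance mounting it.
    """
    import boto3  # deferred so the helper above works without boto3 installed

    ec2 = boto3.client("ec2")

    def state():
        return ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["State"]

    ec2.detach_volume(VolumeId=volume_id, InstanceId=old_instance_id)
    if not wait_until(lambda: state() == "available"):
        raise TimeoutError("volume never detached")

    ec2.attach_volume(VolumeId=volume_id, InstanceId=new_instance_id, Device=device)
    if not wait_until(lambda: state() == "in-use"):
        raise TimeoutError("volume never attached")
```

A script like this could run from cloud-init on the new instance at boot, which is presumably where the cloudinit link above comes in.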
Via Ansible?
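A sketch of the Ansible route, assuming the ec2_vol module and placeholder variables (`persistent_volume_id`, `ansible_ec2_instance_id`): attach the volume from the playbook, then reuse the mount task from the pre_tasks above.

```yaml
# Sketch only: variable names are placeholders.
pre_tasks:
  - name: attach the persistent volume to this instance
    ec2_vol:
      id: "{{ persistent_volume_id }}"
      instance: "{{ ansible_ec2_instance_id }}"
      device_name: /dev/sdf
    delegate_to: localhost

  - name: mount the persistent volume
    mount:
      name: /secondary
      src: /dev/xvdf  # ubuntu renames /dev/sdf to /dev/xvdf
      fstype: ext4
      state: mounted
```

The same detach-before-attach gap applies: ec2_vol cannot attach the volume until the old instance has released it.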