Closed · itsouvalas closed this 1 year ago
Do we know why it was removed in the first place? Is this a CPI issue where the vSphere CPI behaves differently? Should this only be done on vSphere deployment targets?
The persistent disks link provided doesn't differentiate between CPIs; the `persistent_disk` key applies to all BOSH deployments regardless of the CPI in use, and it is what identifies a disk as persistent.
A note on that same link reiterates that:

> If you terminate or delete a VM from your IaaS console, the fate of the persistent disk depends on the IaaS provider. For example, in AWS, the default behavior is to keep the persistent disk when you delete a VM.
That said, although I haven't tested it on AWS, the documentation suggests that in the absence of `persistent_disk`, BOSH treats the disk as ephemeral, so a subsequent deployment, or a VM replacement triggered by BOSH's Health Monitor, should result in the volume being scrapped and repopulated from the stemcell alone.
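To make that concrete, here is a minimal instance-group sketch; the group name, job, and size are hypothetical, not taken from this repo:

```yaml
instance_groups:
- name: database            # hypothetical instance group
  instances: 1
  vm_type: default
  stemcell: default
  networks:
  - name: default
  # Declares a persistent disk (size in MB) that BOSH attaches and mounts
  # at /var/vcap/store. Without this key, anything written there lives on
  # ephemeral storage and is lost when BOSH recreates the VM.
  persistent_disk: 10240
```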
As for why it was replaced in the first place, I believe that at the time the requester had confused `disk_type`, and its ability to select a disk size from a cloud config, with the actual type of the disk, which in this case is meant to be persistent, hence the aptly named `persistent_disk`.
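For reference, in stock BOSH v2 manifests the size-selection-from-cloud-config path goes through `disk_types` and `persistent_disk_type`, and either `persistent_*` key is what declares persistence; the names and sizes below are illustrative:

```yaml
# cloud config: defines selectable sizes; nothing is persistent by itself
disk_types:
- name: large
  disk_size: 51200              # MB

# deployment manifest: referencing the type both picks the size from the
# cloud config and declares the disk persistent
instance_groups:
- name: database
  persistent_disk_type: large
```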
The changes look good; this makes the disk configuration consistent with the rest.
While using `disk_type` instead of `persistent_disk` allows the disk size to be selected from the cloud config, it doesn't declare the disk as persistent. As a result, the root volume ends up holding the predefined `/var/vcap/store`, which is meant to be mounted on the persistent disk, eventually causing the root volume to run out of space. Reinstating `persistent_disk` adds an additional volume which is rightfully mounted under `/var/vcap/store`. This time around, the root `/` volume remains at a reasonable usage level. This is easiest to notice on VMware's CPI, where the root volume is 3 GB.
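As a sketch of the before/after described above (the instance-group name, the key spelling in the "before" comment, and the size are illustrative):

```yaml
instance_groups:
- name: worker
  # Before (roughly): a size selection only, with no persistent disk
  # declared, so /var/vcap/store sat on the ~3 GB root volume:
  # disk_type: default
  #
  # After: an explicit persistent disk, mounted at /var/vcap/store,
  # leaving the root / volume at a reasonable usage level:
  persistent_disk: 20480    # MB
```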