The current implementation has a limitation: all the PVCs in the namespaces a user is migrating need to be provisioned by one and only one dynamic provisioner. For most use cases so far, glusterfs was assumed to be the dynamic provisioner on the source cluster, and stage 3 calculated the source node path using this assumption.
To support a namespace with PVCs from multiple storage provisioners, e.g. a namespace foo where half of the PVCs are provisioned by glusterfs and the other half by gp2, pvc-migrate will have to determine the provisioner of each PVC in stage 1.
This can be done as follows:
In stage 1, look up the PV from pvc.spec.volumeName and find the annotation key pv.kubernetes.io/provisioned-by. This annotation holds the name of the plugin that provisioned the PV, for example:
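A sketch of what this looks like on a dynamically provisioned PV (the PV name, capacity, and glusterfs details below are illustrative, not taken from a real cluster; kubernetes.io/glusterfs is the in-tree glusterfs provisioner name):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example            # hypothetical PV name
  annotations:
    # Set by the dynamic provisioner that created this PV.
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-dynamic-example
    path: vol_example
```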
It is possible that the PV was statically provisioned, in which case it will not have this annotation. Hence, if the annotation pv.kubernetes.io/provisioned-by is not found on the volume, stage 1 will have to look at volume.spec, which indicates which plugin is used for the volume. For example, gp2 volumes have a spec field awsElasticBlockStore. One can run oc explain pv.spec to see the exact mapping between a spec field and the plugin name. Once this plugin name is identified, stage 3 can use it to find the source path.
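A rough sketch of how stage 1 could resolve the provisioner per PVC, assuming access to the cluster through the official kubernetes Python client; the function name, the namespace/claim names, and the spec-field-to-plugin mapping below are hypothetical and only cover a subset of volume sources, they are not part of pvc-migrate:

```python
from kubernetes import client, config

# Illustrative fallback mapping from in-tree volume source fields on pv.spec
# to plugin names; see `oc explain pv.spec` for the full list of sources.
SPEC_FIELD_TO_PLUGIN = {
    "glusterfs": "kubernetes.io/glusterfs",
    "aws_elastic_block_store": "kubernetes.io/aws-ebs",
    "nfs": "nfs",
    "host_path": "hostpath",
}


def resolve_provisioner(core_v1, namespace, pvc_name):
    """Return the plugin name backing a PVC, or None if it cannot be determined."""
    pvc = core_v1.read_namespaced_persistent_volume_claim(pvc_name, namespace)
    pv = core_v1.read_persistent_volume(pvc.spec.volume_name)

    # Dynamically provisioned PVs carry the provisioner name in this annotation.
    annotations = pv.metadata.annotations or {}
    plugin = annotations.get("pv.kubernetes.io/provisioned-by")
    if plugin:
        return plugin

    # Statically provisioned PVs: infer the plugin from whichever volume source
    # field is set on pv.spec (e.g. gp2/EBS volumes set awsElasticBlockStore).
    for field, name in SPEC_FIELD_TO_PLUGIN.items():
        if getattr(pv.spec, field, None) is not None:
            return name
    return None


if __name__ == "__main__":
    config.load_kube_config()
    v1 = client.CoreV1Api()
    # Hypothetical namespace and claim name, for illustration only.
    print(resolve_provisioner(v1, "foo", "my-claim"))
```

Stage 3 could then branch on the returned plugin name when computing the source node path instead of assuming glusterfs everywhere.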