kubernetes-sigs / aws-efs-csi-driver

CSI Driver for Amazon EFS https://aws.amazon.com/efs/
Apache License 2.0

Does the EFS CSI Driver work with an EFS in a different vpc-peered AWS account? #84

Closed: JustinPlute closed this issue 4 years ago

JustinPlute commented 5 years ago

I see examples with just the volumeHandle set to the EFS ID, but I'd imagine you'd need some configuration to supply the EFS mount-target IP address?

Per https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access-vpc-peering.html, "You can't use DNS name resolution for EFS mount points in another VPC. To mount your EFS file system, use the IP address of the mount points in the corresponding Availability Zone."
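
For context, the examples in question define a static PersistentVolume roughly like the minimal sketch below (names and the file system ID are placeholders); note there is no field for a mount-target IP:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi            # required by the API; EFS itself is elastic
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-12345678   # just the file system ID; nowhere to put an IP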

leakingtapan commented 5 years ago

Currently the driver uses the DNS name for EFS mount-point resolution. Does the alternative from the AWS doc work for you?

Alternatively, you can use Amazon Route 53 as your DNS service. In Route 53, you can resolve the EFS mount target IP addresses from another VPC by creating a private hosted zone and resource record set. For more information on how to do so, see Working with Private Hosted Zones and Working with Records in the Amazon Route 53 Developer Guide.
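
As a rough sketch of that alternative (hypothetical VPC and file system IDs), the zone-creation step could look like this; passing --vpc is what makes the zone private and associates it with the VPC that needs to resolve the name:

    # Hypothetical IDs throughout; the zone name is what the
    # mount helper will try to resolve from the client VPC.
    aws route53 create-hosted-zone \
      --name fs-12345678.efs.us-west-2.amazonaws.com \
      --caller-reference efs-cross-vpc-$(date +%s) \
      --vpc VPCRegion=us-west-2,VPCId=vpc-2f09a348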

JustinPlute commented 5 years ago

No. I'll be using fs-12345678 as an example of an EFS file system in another AWS account. Following the instructions, when I try to create a private hosted zone (e.g., fs-12345678.efs.us-west-2.amazonaws.com), I get the following error:

The VPC that you chose, vpc-2f09a348 in region us-west-2, is already associated with another private hosted zone that has an overlapping name space, efs.us-west-2.amazonaws.com..

So instead I create a private hosted zone that doesn't use amazonaws.com (e.g., fs-12345678.efs), then create an A record in that zone whose value is an EFS mount-target IP address in the other AWS account. After this step, I use fs-12345678.efs as the volumeHandle.

Once I do this, however, I get the following error as a Pod Event:

  Warning  FailedMount             58s (x8 over 2m4s)  kubelet, ip-10-39-205-140.us-west-2.compute.internal  MountVolume.SetUp failed for volume "efs-pv" : rpc error: code = Internal desc = Could not mount "fs-12345678.efs:/" at "/var/lib/kubelet/pods/d2c63940-d834-11e9-9c70-0a2802efa0fc/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-12345678.efs:/ /var/lib/kubelet/pods/d2c63940-d834-11e9-9c70-0a2802efa0fc/volumes/kubernetes.io~csi/efs-pv/mount
Output: The specified CNAME "fs-12345678.efs" did not resolve to a valid DNS name for an EFS mount target. Please refer to the EFS documentation for mounting with DNS names for examples: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html

This error led me to this GitHub comment: https://github.com/aws/efs-utils/issues/21#issuecomment-449425418. The EFS mount helper currently requires that you use the Amazon-provided DNS. As a workaround, you can add a hosts entry (mapping your file system ID to the IP of a desired mount target) on your client.

I'll be testing now to add an entry to /etc/hosts:

192.0.2.0 fs-12345678.efs.us-west-2.amazonaws.com

But I'd love to not have to do this.
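
For what it's worth, a minimal sketch of automating that workaround via node user data (placeholder IP and file system ID), so every node gets the entry without hand edits:

    #!/bin/bash
    # Hypothetical snippet for the node launch template / user data:
    # map the file system ID to a mount-target IP in the peered VPC.
    echo "192.0.2.0 fs-12345678.efs.us-west-2.amazonaws.com" >> /etc/hosts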

wreed4 commented 4 years ago

I'm in the same pickle. Can we get an answer on whether there is a more supported way to do this than editing the AMI or launch config of the nodes?

JustinPlute commented 4 years ago

I have not yet tested mounting via the EFS CSI Driver, but this announcement from last week should now allow creating a private hosted zone with an overlapping namespace, e.g., <efs-id>.efs.<region>.amazonaws.com. Then, ideally, all we'd need to do is add an A record pointing at the EFS mount targets.

https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-route-53-now-supports-overlapping-namespaces-for-private-hosted-zones/
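
Assuming that pans out, the remaining step would be an A record at the apex of the new overlapping zone, pointing at a mount-target IP in the owning account. A sketch (hypothetical zone ID and IP):

    # Hypothetical zone ID and mount-target IP from the other account.
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "fs-12345678.efs.us-west-2.amazonaws.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "192.0.2.0"}]
          }
        }]
      }'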

JustinPlute commented 4 years ago

I was able to test the above with AWS PrivateLink (not a VPC peering connection). Works!

aruizramon commented 4 years ago

@rplute could you share some details (please and thank you)? I'm trying to get it working for VPC peers.

However, I'm seeing a timeout for the volume mount:

  Warning  FailedMount  84s    kubelet, ip-xxxx.ec2.internal  Unable to mount volumes for pod "efs-provisioner": timeout expired waiting for volumes to attach or mount for pod "default"/"efs-provisioner-f64c9fb6d-7r5k4". list of unmounted volumes=[pv-volume]. list of unattached volumes=[pv-volume efs-provisioner-token-plq7n]
leakingtapan commented 4 years ago

@aruizramon Does it work if you mount the EFS file system using the EFS mount helper?
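
For reference, that check can be run directly on a worker node (assuming amazon-efs-utils is installed; the file system ID is a placeholder), which isolates DNS and connectivity problems from the driver itself:

    # Try the same mount the driver would perform, outside Kubernetes.
    sudo mkdir -p /mnt/efs-test
    sudo mount -t efs fs-12345678:/ /mnt/efs-test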

boblozano commented 11 months ago

I realize this is a stale thread, but in case anyone else runs across it unresolved, here is how I got it to work with peering. In addition to the steps @aruizramon outlined,