aws-cloudformation / cloudformation-coverage-roadmap

The AWS CloudFormation Public Coverage Roadmap
https://aws.amazon.com/cloudformation/
Creative Commons Attribution Share Alike 4.0 International

AWS::ECS::TaskDefinition - ECS Fargate mount EFS from VPC peering connection #741

Closed git-josip closed 2 years ago

git-josip commented 3 years ago

Quick Sample Summary:

  1. Title -> AWS::ECS::TaskDefinition EFSVolumeConfiguration
  2. Scope of request -> AWS::ECS::TaskDefinition EFSVolumeConfiguration does not support mounting an EFS file system from a peered VPC when using ECS Fargate. The DNS name of the EFS volume cannot be resolved when mounting it in a task.
  3. Expected behavior -> There should be a way to mount EFS from a peered VPC.
  4. Test case recommendation (optional) -> It is important to test this with ECS Fargate.
  5. Links to existing API doc (optional) -> https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-efsvolumeconfiguration.html
  6. Category tag (optional) -> ECS Fargate

1. ECS Fargate mount EFS from VPC peering connection

(In the format AWS::Service::Resource-Attribute-Existing Attribute.) Samples:

AWS::ECS::TaskDefinition EFSVolumeConfiguration

2. Scope of request

As mentioned above, when I specify the fileSystemId of an EFS file system that is in a peered VPC rather than the one the task runs in, I get an error that the fileSystemId is unknown and its DNS name cannot be resolved. For example, with fileSystemId: test in region eu-central-1, the DNS name is test.efs.eu-central-1.amazonaws.com

The EFS documentation states that the DNS name of an EFS file system cannot be resolved across a peering connection, and that the mount target's private IP must be used instead. Mounting EFS from another VPC: https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access-vpc-peering.html
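For EC2-based clients, the workaround described in that guide is to mount by the mount target's private IP instead of the DNS name (which is exactly what Fargate gives you no way to do). A minimal sketch, with a placeholder IP and mount point, that prints the mount command rather than running it:

```shell
# Placeholder mount target IP from the peered VPC (an assumption, not a real value).
MT_IP="10.0.1.25"
# Standard NFS options recommended in the EFS docs; run the printed command on the instance.
MOUNT_CMD="sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $MT_IP:/ /mnt/efs"
echo "$MOUNT_CMD"
```

This only helps where you control the host, which Fargate tasks do not.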

I then tried using extraHosts to manually add a record to the container's /etc/hosts, but no luck: ECS Fargate only supports networkMode: awsvpc, which does not support extraHosts. The error I get is: Error: ClientException: Extra hosts are not supported on container when networkMode=awsvpc.

So there is no way to mount EFS from a peered VPC using just the fileSystemId.

My proposal is to support mounting the volume by IP address alone, without a fileSystemId, for ECS Fargate tasks.

It would be good to add an ip parameter to https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSVolumeConfiguration.html that is used when fileSystemId is not specified.

amanbedi23 commented 2 years ago

currently facing the same issue

rcollette commented 1 year ago

Why was this issue closed as shipped? I do not see an IP address ability documented in the API mentioned in this issue nor in cloud formation documentation, and the extraHosts option still says that it does not support awsvpc network mode.

ykachube commented 1 year ago

facing same issue

rfossicav commented 1 year ago

I received an AWS Support response today (I was asking about cross-account EFS mounting using the task definition) and was told this feature is not available for FARGATE launch types; a feature request is active on the containers roadmap [1] (the link provided is for roadmap#901, as rcollette posted above). This issue was closed in favor of the older containers-roadmap issue (which covers not just the peering scenario, but cross-account EFS mounting in general)... but they really should use a better workflow state than shipped for this. When I initially ran into this issue, I thought the feature was already available 😑

Meanwhile, the [summarized] process as provided by AWS Support today is:

  1. Establish a VPC peering connection between VPC A and VPC B. You can also set up a VPC Transit Gateway.
  2. Ensure the amazon-efs-utils set of tools is installed on the EC2 instance.
  3. Determine the Availability Zone ID of the EFS mount target. It is recommended to use the mount target IP address in the same Availability Zone as the NFS client; if the EFS file system is in a different account than the EC2 container instance, ensure the EFS mount target and the NFS client are in the same Availability Zone ID: aws ec2 describe-availability-zones. Once you have the AZ ID of the EC2 instance, determine the mount target IP address: aws efs describe-mount-targets --file-system-id file_system_id
  4. Add a host entry for the mount target in the /etc/hosts file on the EC2 container instance that maps the mount target IP address to your EFS file system's hostname. Further instructions can be found in document [5], or you can achieve this by modifying the user data of the EC2 container instance.
  5. Launch a task definition with EFS as storage and mount it on the container, following the usual ECS EFS file system tutorial [6].
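Steps 3 and 4 above can be sketched as follows; the file system id, region, and mount target IP are placeholder values (assumptions, not values from AWS Support), and the lookup commands are left as comments since they require live credentials:

```shell
# Placeholder values - substitute your own (assumptions for illustration).
FS_ID="fs-0ab123"       # EFS file system id
REGION="eu-central-1"   # region the file system lives in
MT_IP="10.0.12.112"     # mount target IP in the client's AZ ID
# Step 3 lookups on a real instance:
#   aws ec2 describe-availability-zones
#   aws efs describe-mount-targets --file-system-id "$FS_ID"
# Step 4: the /etc/hosts entry mapping the mount target IP to the EFS DNS name.
HOSTS_ENTRY="$MT_IP $FS_ID.efs.$REGION.amazonaws.com"
echo "$HOSTS_ENTRY"     # append on the instance: echo "$HOSTS_ENTRY" | sudo tee -a /etc/hosts
```
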
ghost commented 1 year ago

Hello everyone, I have just successfully mounted EFS from another region onto an ECS task (EC2 launch type) via VPC peering. First of all, make sure the ECS instances are able to connect to EFS; verify with netcat against the mount target's private IP on port 2049 (e.g. nc -z <mount-target-ip> 2049).

Edit the efs-utils config file so that it always returns the desired EFS DNS name [1]. For example: in my case the EFS file system is in the ap-northeast-1 region and the ECS task is in ap-southeast-1, so fileSystemId fs-0ab123 should always resolve to fs-0ab123.efs.ap-northeast-1.amazonaws.com. Then add a host entry for the cross-region mount [2]. I needed to update the user data commands for the ECS instances:

sed -i "s/#region = us-east-1/region = ap-northeast-1/" /etc/amazon/efs/efs-utils.conf
echo "10.0.12.112 fs-0ab123.efs.ap-northeast-1.amazonaws.com" | sudo tee -a /etc/hosts
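The region override can be checked on a scratch copy before baking it into user data. This sketch applies the same sed substitution to a temp file seeded with the commented default (the real command above edits /etc/amazon/efs/efs-utils.conf in place):

```shell
# Demonstrate the efs-utils.conf region flip on a scratch copy (GNU sed, as on
# Amazon Linux); the regions match the comment above.
conf="$(mktemp)"
printf '#region = us-east-1\n' > "$conf"
sed -i "s/#region = us-east-1/region = ap-northeast-1/" "$conf"
result="$(cat "$conf")"
echo "$result"
rm -f "$conf"
```
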
FrederiqueRetsema commented 11 months ago

@rcollette: I solved this issue by using split-horizon DNS: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-considerations.html#hosted-zone-private-considerations-split-view-dns. Fortunately you can use this for AWS network names as well: add a hosted zone named fs-01234567890.efs.eu-west-1.amazonaws.com, then put an A record (also fs-01234567890.efs.eu-west-1.amazonaws.com) in it that routes to the mount targets in the peered VPC.
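For anyone doing this outside CDK, the record described above can be expressed as a Route 53 change batch. A hedged sketch; the file system DNS name and IPs are placeholders, and applying it requires a real hosted zone id:

```shell
# Placeholder EFS DNS name and mount target IPs (assumptions for illustration).
FS_DNS="fs-01234567890.efs.eu-west-1.amazonaws.com"
cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "$FS_DNS",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "10.1.0.10"}, {"Value": "10.1.1.10"}]
    }
  }]
}
EOF
# Apply with:
#   aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> \
#     --change-batch file://change-batch.json
cat change-batch.json
```
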

ramarahul commented 9 months ago

@FrederiqueRetsema , I did not follow the A-address thing that you mentioned. What should be the A record key and value for this setup to work? Also did you create a public hosted zone or a private hosted zone? If you've created a private hosted zone did you associate it with the VPC in which EFS exists or the other?

ramarahul commented 8 months ago

Thanks @FrederiqueRetsema for the split-horizon DNS solution. It worked perfectly for our use case where we were trying to mount a multi-AZ EFS in VPC B to our Fargate Service/task in VPC A given that VPC A & VPC B are peered.

Sharing below the CDK code that worked for us in case anyone is still facing the same issue.

Please note the placeholders {VPC A}, {region}, ["x.x.x.x", "x.x.x.x"] and fs-xxxxxxxx in the code; replace them with appropriate values when implementing this on your side.

import { PrivateHostedZone, ARecord, RecordTarget } from "aws-cdk-lib/aws-route53";

const privateHostedZone = new PrivateHostedZone(this, "PrivateHostedZone", {
  vpc: {VPC A},
  zoneName: "efs.{region}.amazonaws.com",
});

// The private IP addresses of the EFS mount targets, one per AZ
const efsIpAddresses = ["x.x.x.x", "x.x.x.x"];

const aRecord = new ARecord(this, "ARecord", {
  recordName: "fs-xxxxxxxx", // file system id
  target: RecordTarget.fromIpAddresses(...efsIpAddresses),
  zone: privateHostedZone,
});
alex-ruehe commented 4 months ago

I think this solution only works if you do not have any EFS file system with a mount target in VPC-A. Otherwise there will already be a private hosted zone, owned by the EFS service, for the same zoneName.