revmischa opened this issue 3 years ago
Hi @revmischa.
Have you looked at this issue? https://github.com/aws/containers-roadmap/issues/825. I believe it details your use case and offers a working configuration.
Could you also attach the code that initializes `taskDef`?
The documentation only mentions how to mount an EFS volume, not an EFS access point. It would be nice to mount an access point in a Fargate task definition. I also don't see anything about the permissions error: how is one supposed to mount and use a freshly created (with CDK) volume in a container if the container doesn't have write permission?
The task definition is here: https://github.com/jetbridge/lemmy-cdk/blob/master/lib/lemmy/ecs.ts#L65
Working on the same thing myself... I finally got everything working, but now I'm stuck with the same permissions issue you mention (related to the `createAcl`; I tried adding the POSIX user/group to the container, to no avail). Apparently there's no way to attach an IAM policy to the EFS volume via the CDK, as I don't see it anywhere in the documentation. I think it would be highly preferable to be able to grant write permissions to the task execution role...
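As a rough sketch of granting the task's role EFS client permissions with an inline policy (assuming `taskDefinition` and `fileSystem` constructs already exist in scope; the names here are illustrative, not from a tested setup):

```typescript
import { PolicyStatement } from 'aws-cdk-lib/aws-iam';

// Hypothetical names: `taskDefinition` and `fileSystem` are assumed to be
// defined elsewhere in the stack.
taskDefinition.addToTaskRolePolicy(new PolicyStatement({
  actions: [
    'elasticfilesystem:ClientMount',
    'elasticfilesystem:ClientWrite',
  ],
  resources: [fileSystem.fileSystemArn],
}));
```

Note that the *task role* (not the execution role) is what the container assumes at runtime, so that is where mount/write permissions belong.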
```ts
private mountDirectory(
  volPath: string,
  containerPath: string,
  fileSystem: FileSystem
) {
  const sourceVolume = `efs-volume-${volPath}`;
  const rootDirectory = `/${volPath}`;
  const ap = fileSystem.addAccessPoint(sourceVolume, {
    path: rootDirectory,
    createAcl: {
      ownerGid: '1000',
      ownerUid: '1000',
      permissions: '700',
    },
  });
  this.taskDefinition.addVolume({
    name: sourceVolume,
    efsVolumeConfiguration: {
      fileSystemId: fileSystem.fileSystemId,
      transitEncryption: 'ENABLED',
      authorizationConfig: {
        accessPointId: ap.accessPointId,
      },
    },
  });
  this.container.addMountPoints({
    containerPath,
    sourceVolume,
    readOnly: false,
  });
}
```
This seems to me like it's a CloudFormation limitation; there is no `EFS::Volume` resource. The only related functionality I can find is `ECS::TaskDefinition.Volumes`, which @JoelVenable has shown how to use with the `taskDefinition.addVolume()` method.
Here's the documentation on TaskDefinition volumes in CFN. You might notice that this is identical to the CDK `Volume`, which you add with the `addVolume()` method. So, using it as described above (with `authorizationConfig.accessPointId`) lets you select the access point to use.
Regarding permissions, the only relevant property I can see is `authorizationConfig.iam`. According to CFN, this property:

> Determines whether to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system.

Have you tried using this? I'm not very familiar with these services. If this functionality doesn't suit your needs, I suggest creating a feature request in the CFN Coverage Roadmap.
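For reference, a minimal sketch of wiring the access point and IAM authorization together (assuming `taskDefinition`, `fileSystem`, and `accessPoint` constructs are defined elsewhere; the names are illustrative):

```typescript
// Hypothetical names: `taskDefinition`, `fileSystem`, and `accessPoint`
// are assumed to exist elsewhere in the stack.
taskDefinition.addVolume({
  name: 'efs-volume',
  efsVolumeConfiguration: {
    fileSystemId: fileSystem.fileSystemId,
    transitEncryption: 'ENABLED',
    authorizationConfig: {
      accessPointId: accessPoint.accessPointId,
      // Use the task role's IAM permissions when mounting the file system,
      // so access can be controlled via policies on the task role.
      iam: 'ENABLED',
    },
  },
});
```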
Looking over the ECS documentation for CloudFormation, it appears `MountPoint` is now available; it is also present in the UI when using the AWS console.
I've manually added the EFS volume and mounted it using the task definition JSON, and it appears to work now!
Is there a way for me to use this functionality within the CDK today? It seems my only workaround is to use the CDK to create the EFS volume and enable network access, then manually add the mount point using CloudFormation. Obviously this will break on every deployment, but unfortunately I need EFS for my use case.
First, ensure you are using an up-to-date CDK version for the latest features. I didn't wind up going this direction, so I don't have a working example, but it looks like the pathway is:

1. Call the `addVolume()` method on the `TaskDefinition` and provide an `efsVolumeConfiguration`.
2. Call the `addMountPoints()` method on the `ContainerDefinition`, ensuring the volume name is the same between the two calls (`name` in `addVolume()`, `sourceVolume` in `addMountPoints()`).

Since this was frustrating for me and I don't see the full solution here, here's what worked for me. Permissions need to be added and access point mounts need to be created. See the example below:
```ts
mountDirectoryToContainers() {
  const sourceVolume = "SdSourceVolume";
  const sourcePath = "/data";
  const fileSystem = new FileSystem(this, `${APP_NAME}FileSystem`, {
    vpc: this.vpc,
    encrypted: true,
    lifecyclePolicy: LifecyclePolicy.AFTER_14_DAYS,
    performanceMode: PerformanceMode.GENERAL_PURPOSE,
    throughputMode: ThroughputMode.BURSTING,
    removalPolicy: RemovalPolicy.DESTROY,
  });
  fileSystem.connections.allowDefaultPortFrom(this.ecsService.connections);
  const efsAccessPoint = fileSystem.addAccessPoint('AccessPoint');
  efsAccessPoint.node.addDependency(fileSystem);
  const efsMountPolicy = new PolicyStatement({
    actions: [
      'elasticfilesystem:ClientMount',
      'elasticfilesystem:ClientWrite',
      'elasticfilesystem:ClientRootAccess',
    ],
    resources: [
      efsAccessPoint.accessPointArn,
      fileSystem.fileSystemArn,
    ],
  });
  this.taskDefinition.addToTaskRolePolicy(efsMountPolicy);
  // This policy permission is probably not necessary.
  this.taskDefinition.addToExecutionRolePolicy(efsMountPolicy);
  this.taskDefinition.addVolume({
    name: sourceVolume,
    efsVolumeConfiguration: {
      fileSystemId: fileSystem.fileSystemId,
      transitEncryption: 'ENABLED',
      authorizationConfig: {
        accessPointId: efsAccessPoint.accessPointId,
      },
    },
  });
  this.downloadContainer.addMountPoints({
    containerPath: sourcePath,
    sourceVolume,
    readOnly: false,
  });
  this.inferenceContainer.addMountPoints({
    containerPath: sourcePath,
    sourceVolume,
    readOnly: false,
  });
}
```
Still getting a permission error. How do I essentially say `chmod -R 777` on the EFS file system?
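One way to approximate that (a sketch, assuming a `fileSystem` construct is in scope; the names and paths are illustrative) is to mount through an access point whose `createAcl` makes the root directory world-writable:

```typescript
// Hypothetical: `fileSystem` is assumed to be an efs.FileSystem defined elsewhere.
const accessPoint = fileSystem.addAccessPoint('WritableAccessPoint', {
  path: '/shared',
  // EFS creates /shared with this owner and mode if it does not already
  // exist, which is roughly equivalent to `chmod 777` on the mount root.
  createAcl: { ownerUid: '1000', ownerGid: '1000', permissions: '777' },
  // All clients act as this POSIX user, regardless of the container's user.
  posixUser: { uid: '1000', gid: '1000' },
});
```

Note that `createAcl` only applies when the directory is first created; it does not retroactively change permissions on an existing directory.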
@danieloi you have probably already solved the issue, but I will post a solution that works for me. Most likely a `SecurityGroup` rule for the NFS port and the `AccessPoint` POSIX user are missing from your configuration. The complete example is as follows:
```ts
const securityGroup = new ec2.SecurityGroup(this, "security-group", {
  securityGroupName: "cdk-efs-lnd",
  vpc: properties.cluster.vpc,
  allowAllOutbound: true,
});
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(2049), "cdk-efs");

// EFS File System
this.fileSystem = new efs.FileSystem(this, 'lnd-efs', {
  fileSystemName: "cdk-lnd-nodes",
  vpc: properties.cluster.vpc,
  encrypted: true,
  // NB: we can destroy EFS here because this is a test container
  removalPolicy: RemovalPolicy.DESTROY,
  lifecyclePolicy: efs.LifecyclePolicy.AFTER_7_DAYS,
  performanceMode: efs.PerformanceMode.GENERAL_PURPOSE,
  throughputMode: efs.ThroughputMode.BURSTING,
  securityGroup: securityGroup,
});

// Allow access to EFS from Fargate ECS
this.fileSystem.grantRootAccess(this.taskRole.grantPrincipal);

// Access Points
const aliceAccessPoint = new efs.AccessPoint(this, 'alice-access-point', {
  fileSystem: this.fileSystem,
  path: AliceVolume.path,
  // TODO reduce permissions to required?
  createAcl: { ownerUid: '1777', ownerGid: '1777', permissions: '777' },
  posixUser: { uid: '1777', gid: '1777', secondaryGids: [] },
});

this.addVolume({
  name: AliceVolume.name,
  efsVolumeConfiguration: {
    fileSystemId: this.fileSystem.fileSystemId,
    transitEncryption: "ENABLED",
    authorizationConfig: {
      accessPointId: aliceAccessPoint.accessPointId,
      iam: "ENABLED",
    },
  },
});

this.lndAliceContainer.addMountPoints({
  sourceVolume: AliceVolume.name,
  readOnly: false,
  containerPath: "/root/.lnd",
});
```
First, thanks to all for the code snippets above, which were very helpful to me. I wanted to add: if you're placing your ECS containers / EFS file system in a subnet without egress (no NAT gateway), make sure to add the EFS VPC interface endpoint!
```ts
this.vpc.addInterfaceEndpoint("EfsVpcEndpoint", {
  service: InterfaceVpcEndpointAwsService.ELASTIC_FILESYSTEM,
});
```
I want to be able to create an EFS volume and mount it in a Docker container with CDK.
I tried figuring out how to use an access point but can't figure out how to specify the AP ID in a `Volume` definition. I don't see anything about access points in the ECS docs. I tried creating an EFS volume and mounting it, but I can't write to it and I have no way to change the permissions. I'm using a community Docker image that I'd rather use as-is without modifying. Surely this is a common use case: create a writable persistent volume and mount it.
Use Case
I'm building this - https://github.com/jetbridge/lemmy-cdk
Proposed Solution
A way to mount an EFS access point in a Fargate task definition, or a way to mount the entire EFS volume as writable.
Other
This is a :rocket: Feature Request