aws / containers-roadmap

This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
https://aws.amazon.com/about-aws/whats-new/containers/

Expose AWS credentials on a per-container basis #204

Open dinvlad opened 7 years ago

dinvlad commented 7 years ago

In reference to #346, I'd like to create a feature request to make credentials even more granular.

Currently, all containers within a single task definition share the same set of permissions, so multi-container task definitions are hard to use in sensitive scenarios. That severely limits some applications, for example pairing each "application" container with a co-pilot "service" container on the same host. Instead of hosting a single ECS service shared by multiple "application" containers, we would like to run one "service" container alongside each "application" container, with different permissions for each container in the pair. The "service" container could be a custom logger, an S3 management container, or a DB instance paired with a client-facing application. Each application container and its corresponding "service" container should be completely isolated from the other pairs, running on a dedicated bridge network, with sensitive permissions granted only to the backend. Today, the only way to achieve this is to split the pair into separate tasks, put them on the host network (which provides less isolation), and write a custom scheduler that places both tasks on the same EC2 instance.
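For concreteness, here is a minimal boto3 sketch of how this looks today (role ARN, images, and names are just placeholders): the single `taskRoleArn` is the only role that can be attached, and every container in the task inherits it.

```python
import boto3

ecs = boto3.client("ecs")

# Today: a single taskRoleArn covers every container in the task.
# The role ARN and images below are placeholders.
ecs.register_task_definition(
    family="app-with-copilot",
    networkMode="bridge",
    taskRoleArn="arn:aws:iam::123456789012:role/shared-task-role",
    containerDefinitions=[
        {"name": "application", "image": "my-app:latest", "memory": 512, "essential": True},
        {"name": "service", "image": "my-copilot:latest", "memory": 256, "essential": True},
    ],
)
```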

Of course, there are workarounds, e.g. running a single pair of tasks on each EC2 instance. Still, they introduce an additional layer of complexity and/or unnecessary constraints on the instance size and the total number of instances (which is limited at the account level).

It seems that adding a "container role ARN" option to "container overrides" would address this problem without breaking the current API. The option could apply in isolation to each container when no task role is configured, so there would be no conflict with the latter. Alternatively, it could form a superset of the two roles, much like task role policies currently apply in addition to instance role policies. Or it could even override the task role policies, though that would require more work to implement while preserving compatibility with the current behavior.
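A rough sketch of what such an override could look like; the per-container `roleArn` key below is purely hypothetical and is not part of the current ContainerOverride API:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical: "roleArn" inside containerOverrides does NOT exist in the
# current API; it only illustrates the proposal above.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="app-with-copilot",
    overrides={
        "containerOverrides": [
            {"name": "application", "roleArn": "arn:aws:iam::123456789012:role/app-role"},      # hypothetical
            {"name": "service", "roleArn": "arn:aws:iam::123456789012:role/backend-role"},      # hypothetical
        ]
    },
)
```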

samuelkarp commented 5 years ago

What sorts of things would you want to share between these "application" and "service" containers and what sorts would you want to have isolated? From your initial description, it sounds like you want separate roles/permission boundaries as well as separate networks, but I'd love to know more.

The reason I ask this is that we've traditionally considered the task to be the isolation boundary for workloads, which is why we have a task-level role, task-level networking with awsvpc network mode, and can share other resources like volumes and namespaces (ipc and pid) between containers in a task. The other aspect of a task is co-location, and it might be useful to model co-location as a construct of multiple tasks (a "task group", possibly) rather than attempting to introduce additional boundaries within a single task.

jeichorn commented 5 years ago

We have a similar use case. We are using sidecars to provide services to the main service.

Logging, metrics, and configuration delivery. Shared volumes across co-located tasks would be the minimum requirement to do this with multiple tasks, though it would be easiest if we could also use a separate Docker network for each task group.
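Something along these lines (names and images are illustrative) is what the sidecar setup looks like today with a shared volume; note that both containers still inherit the same task role:

```python
import boto3

ecs = boto3.client("ecs")

# Illustrative only: a main container plus a log-shipping sidecar sharing a
# volume. Both containers inherit the single task role.
ecs.register_task_definition(
    family="service-with-sidecar",
    taskRoleArn="arn:aws:iam::123456789012:role/shared-task-role",
    volumes=[{"name": "telemetry", "host": {}}],
    containerDefinitions=[
        {
            "name": "main-service",
            "image": "main-service:latest",
            "memory": 512,
            "essential": True,
            "mountPoints": [{"sourceVolume": "telemetry", "containerPath": "/var/telemetry"}],
        },
        {
            "name": "log-shipper",
            "image": "log-shipper:latest",
            "memory": 128,
            "essential": True,
            "mountPoints": [{"sourceVolume": "telemetry", "containerPath": "/var/telemetry", "readOnly": True}],
        },
    ],
)
```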

danieljamesscott commented 5 years ago

We would like to use this to have an initialisation container which runs with elevated privileges to obtain an authentication token. This token is then passed to the long-running container with reduced privileges.
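A sketch of that pattern with the current task definition API (names and images are illustrative; `dependsOn` requires a recent ECS agent): ordering and token hand-off via a shared volume work today, but both containers still run under the same task role, which is exactly what we would like to avoid.

```python
import boto3

ecs = boto3.client("ecs")

# Sketch of the init-container pattern with the current API. Ordering via
# dependsOn and token hand-off via a shared volume work today, but both
# containers run under the same (elevated) task role.
ecs.register_task_definition(
    family="token-init-example",
    taskRoleArn="arn:aws:iam::123456789012:role/elevated-role",  # applies to BOTH containers today
    volumes=[{"name": "token", "host": {}}],
    containerDefinitions=[
        {
            "name": "init",
            "image": "token-fetcher:latest",
            "memory": 128,
            "essential": False,  # runs once and exits
            "mountPoints": [{"sourceVolume": "token", "containerPath": "/run/secrets"}],
        },
        {
            "name": "app",
            "image": "my-app:latest",
            "memory": 512,
            "essential": True,
            "dependsOn": [{"containerName": "init", "condition": "SUCCESS"}],
            "mountPoints": [{"sourceVolume": "token", "containerPath": "/run/secrets", "readOnly": True}],
        },
    ],
)
```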