aws / containers-roadmap

This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
https://aws.amazon.com/about-aws/whats-new/containers/

How to run "docker exec... " command in ECS #187

Closed shoaib-tarams closed 3 years ago

shoaib-tarams commented 8 years ago

Hi, I have three containers running in ECS, but the website only comes up after we run a "docker exec..." command. I can do this by logging into the server and running the command, but that shouldn't be necessary. So my question is: how can I run a "docker exec..." command without logging into the server? A solution using the Amazon ECS console, the ecs-cli, or anything else you know of would be fine. Since the ecs-cli lets us create clusters, tasks, etc. from our local machine, how can we run a docker exec command from the local machine against the containers?

jgardezi commented 5 years ago

It would be quite useful to SSH into an AWS ECS Fargate container, as we need to run DB commands manually instead of adding them to the Dockerfile.

The only reason we are using EC2 container instances is that we need to SSH into the container instance and run docker commands.

+1 for SSH in Fargate.

swordfish444 commented 5 years ago

+1 for SSH into Fargate. As a company running Rails, we need to be able to run an interactive Rails console.

sterichards commented 5 years ago

+1 for SSH in Fargate.

This isn't a 'nice to have'; this is a necessity.

jhartman86 commented 5 years ago

+1... What everyone else said. Specifically, the use case of running DB commands.

dalegaspi commented 5 years ago

OK, so AWS re:Invent came and went, and I find it heartbreaking that Amazon isn't listening to us mere mortals.

Amazon is pushing everyone to put everything on AWS Lambda, where it's painfully obvious that it's not a replacement for anything Docker/Kubernetes-related... and then ECS/Fargate isn't at parity with ECS on EC2 when it comes to obvious features, like having more than 4 vCPUs per Docker container, and this ability to SSH into the container. Hey, never mind that Fargate is FAR MORE EXPENSIVE than equivalent EC2-based hosts... but nobody wants to talk about pricing, am I right?

As for a solution, I need to be able to SSH into some kind of "virtual host/server" that's in an ECS cluster (the "Fargate host", for lack of a better term), where I can also apply a Security Group or "assign-to-a-subnet-of-my-choice" magic to protect it (think Elastic Load Balancer security), and where I can run any docker command: a docker ps would list all the Docker processes/containers running on the Fargate cluster. In this case Fargate is just "one big Docker host/server" with limited/sandboxed SSH access where you can only run docker commands.

mumoshu commented 5 years ago

The lack of this feature for Fargate seems to be blocking Fargate for EKS as well.

That is, there is a project called virtual-kubelet that works as an adapter between Kubernetes and Fargate, but the only cloud provider that supports kubectl exec (what's requested in this feature request) for serverless containers (like Fargate) is ACI/Azure as of today.

https://github.com/virtual-kubelet/virtual-kubelet/issues/106

An addition to the AWS ECS API that let us start interactive sessions to containers running on either EC2-based or Fargate-based infrastructure would probably help us all.

deleugpn commented 5 years ago

I don't fully understand this thread. The whole point of ECS/Fargate is immutable deployment with unmanaged infrastructure. Having SSH on Fargate would be the worst feature Amazon could build; we might as well go back to a simple EC2 instance with bash scripts. As for docker exec, you're not supposed to interact with your container. You can define the CMD/ENTRYPOINT of your container and it will be executed on start-up, but other than that Fargate containers should be fully closed. That's an amazing feature that my company uses to explain to big enterprises how nobody can manipulate the software we deploy, not even the owner of the software.

mumoshu commented 5 years ago

I personally agree that it should usually be disabled for production environments.

But I'd still love to see a kind of docker exec via the AWS ECS API to ease debugging things running inside containers in pre-production environments.

patrickdizon commented 5 years ago

I've changed my view after using Fargate for the last few months. I would not want SSH access, for security reasons. Code execution should be done using a separate task, with different builds and different CI/CD deployments. You can tag your ECR repo accordingly.

dalegaspi commented 5 years ago

The point of SSH access on Fargate is only to be able to "SSH into" (spawn a shell via docker exec) the container itself... so you can check something like whether your container can connect properly to ElastiCache by using nc.

Sure, this can be done without docker exec by building multiple images just to "troubleshoot", but isn't that tedious?

Lack of SSH for "security reasons"? You're implying ECS on EC2 is less secure? That's a load of baloney. You can secure your SSH access by limiting it via network ACLs, SGs, a bastion... among other things. Having SSH access and securing SSH access are two different things.

deleugpn commented 5 years ago

Are you implying that monitoring / error-handling tools are useless because you can just ssh + nc? Of course you're not implying that, but in the same way you think my argument is 'baloney' on the grounds of ACLs and SGs, I think your argument is 'baloney' because you can use CloudWatch.

I'm not arguing that you cannot build safe containers with ECS on EC2, but I am arguing that safety is not a concern for me anymore. It's like a door and a wall: you can make sure your door is properly secured and be responsible for that. In fact, your door can be as secure as a wall. That doesn't mean a wall is safer; it just means that with a wall, nobody needs to think about safety.

If your container cannot connect to ElastiCache, check CloudWatch.

dalegaspi commented 5 years ago

@deleugpn I did not imply that monitoring tools are useless (and find that a bit of an overreach), and that is not the focus of the argument. I stated a use case (checking connectivity to ElastiCache) for SSH-ing into a container, perhaps a bad one.

ECS is built on Docker technology, and Docker allows you to spawn an interactive shell in your container. ECS Fargate does not let you do that, and I question that missing feature; AWS can say it's for security reasons, but I highly doubt that.

I'm sorry, your door/wall analogy doesn't resonate with me here; by your analogy, a door and a wall are the same thing. "It just means with a wall nobody needs to think about safety": if somebody wants to get into a room and all you have is walls, you can bet that person will start testing how strong your walls are.

semenodm commented 5 years ago

I'm using Fargate with a JVM running inside. When I see an OutOfMemory error, I usually create a heap dump and analyze it with MAT. Now, if I don't have SSH access to the Fargate instance, how can I extract the heap dump? The only option I see is to attach a volume to the container and store the heap dump there.

tinyzimmer commented 5 years ago

@dalegaspi I agree with the sentiment, but there is a good case to be made for security concerns with this functionality. It could have to do with anything from how AWS manages container networking on the backend (you aren't getting dedicated hosts) to whether, if I got into one Fargate container, I could stumble on a way to pivot into someone else's. It's managed compute, so you don't get to do everything you want, and yes, that's usually for security reasons.

I just don't think that should be a showstopper for providing the functionality. I think there could be ways to ensure isolation on the backend and expose an API that directs your request to a specific container ID. Even something like a "managed agent" similar to SSM, where I can pass a command and then query the output (no interactive shells, though), would do.

An issue I had just now that would have made this helpful: I added an additional logging handler to get my logs into ES, and I didn't notice that this somehow disabled the awslogs driver. The ES logs stopped shipping for a separate reason, and I couldn't see the logs in CloudWatch either (I realized I had to manually add a StreamHandler after my other custom handler to get this to work as intended).

+1 to basically being able to run a "ps" inside the container to confirm my application was actually alive, and a "cat" in a few other places.

WisdomPill commented 5 years ago

Migrations and other "one-time commands" that run before production can easily be run using CodeDeploy or CodeBuild; in fact, some of them can even live in a Lambda function or in the CMD of the Dockerfile. An interactive shell, though, is another matter. I think I will keep some small instances inside the VPC I'm interested in. In some projects I have tasks running on EC2 and I use those hosts for interactive shells, but for the ones that are fully on Fargate, having a separate EC2 instance only for scripting seems a little cumbersome. Having now used ECS for a few months, it seems very strange not to have this option; something using the awscli would be amazing and much desired, maybe a flag next to "public IP" in the configuration to enable running exec commands.

ishikawam commented 5 years ago

+1 for Fargate. db migrate...

deleugpn commented 5 years ago

You can set up a Task Definition with the CMD set to your db migrate command and then simply start the Task. It will start a container, run your command, and shut down. No need for docker exec for that.
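
For illustration, a one-off task like that could also be launched from the CLI by overriding the container command at run time; the cluster, task definition, container name, and network IDs below are hypothetical placeholders, not anything from this thread:

# Run a one-off Fargate task with the container command overridden to run a migration.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition app-task:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}' \
  --overrides '{"containerOverrides":[{"name":"app","command":["bundle","exec","rails","db:migrate"]}]}'

The task starts, runs the overridden command, and stops on its own once it exits.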

benjamin-cribb commented 5 years ago

My devs currently SSH into infrastructure and do things in our test environments (technically via Session Manager). I've already had a false start trying to force them off shell access. I want the scalability advantages Fargate provides, but for certain workloads the inability to log in and tweak things is a huge time sink.

Fargate is not Lambda. Containers are long-running and stateful during their lifetime (and thus get into weird states). Startup devs lean heavily on tools like the Rails console. Shell access is less secure, but there are well-established ways of working with it and making it more secure.

I'm pretty sure Amazon is working on this... Those of you who spin missing features as being by design always end up disappointed.

whatch commented 5 years ago

Instances are cattle, not pets. The processes your tasks and services are meant to run either run or they don't, at which point ECS simply replaces the container. If you need another process to run, you just make a new task definition with a different command or entry point and you're done. If you've set up the cluster and services correctly, there's no good reason to have to SSH into the instance, let alone the container; anything you'd need to do that for would have been addressed during development. That's the philosophy, anyway, and I'd bet heavily against AWS doing anything to change it. If you really insist on SSH access, use EC2 clusters instead of Fargate.

benjamin-cribb commented 5 years ago

I get it. It's still a waste of time to maintain my own library of things to run on my instances when the Rails console already gives my devs a comprehensive library of the things they want to run. In some cases, that makes it not worth the extra security.

jlmadurga commented 5 years ago

Most use cases can be fulfilled with one-off containers: just set up a task that starts, runs the desired command (migrate DBs, load data, ...), and dies. The problematic case is running an interactive container for interactive console commands (e.g. queries). On other platforms, such as Heroku, this is already solved by the SDK. Does anybody know how to start an interactive container from the AWS CLI?

FernandoMiguel commented 5 years ago

@jlmadurga It's a terrible practice, but you can SSH into the EC2 instances that are part of your ECS cluster and run your docker commands from there.

jlmadurga commented 5 years ago

@FernandoMiguel I am planning to use Fargate; that's why I want to run an interactive container for some rare app shell tasks. With Fargate you don't manage the infrastructure, and I don't want to bake SSH into my images. Heroku has this: https://devcenter.heroku.com/articles/one-off-dynos#connecting-to-a-production-dyno-via-ssh

jtatum commented 5 years ago

Session Manager pretty much obviates a lot of the security concerns presented here. A Session Manager-like docker exec experience would be a troubleshooting dream: it's auditable via CloudTrail, uses IAM, and logs sessions to CloudWatch... (PSA: if you're still using SSH rather than Session Manager to access EC2 instances, you're missing out on the best new feature from AWS in a long time!)

apoca commented 5 years ago

I don't fully understand this thread. The whole point of ECS/FARGATE is immutable deploy with unmanaged infrastructure.

Yes, we all understand that! I don't want to change anything, and I want those instances to stay immutable; the question is how I should run commands (e.g. artisan from Laravel)!?

RoryKiefer commented 5 years ago

We didn't need Fargate/EC2/Docker to make software deployments immutable. Deploying to places with so little access that they're un-troubleshootable has always been possible, even with conventional VMs and deployments. It was just never implemented that way because it doesn't make any sense at all.

softprops commented 5 years ago

For those with bastion hosts running ECS on container instances:

https://gist.github.com/softprops/3711c9fe54da673b1ebb53610aab4171

jtatum commented 5 years ago

@softprops Consider eliminating SSH in favor of Session Manager (and the Session Manager AWS CLI plugin): https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html

softprops commented 5 years ago

Yep. That's next on my list. I just recently discovered that. Good stuff.

rothgar commented 5 years ago

Session Manager can't run a command at session connection time. Is there a workaround for that? I have other use cases where I wanted to do this but couldn't.

E.g., with ssh this works:

ssh $HOST uptime

but with Session Manager it gives an error:

aws ssm start-session --target $HOST uptime
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

Unknown options: uptime

jtatum commented 5 years ago

Getting kind of off topic, but I think you're looking for Systems Manager documents. You define documents that run plugins; these documents can be applied to a list of instances, and the output can be retrieved. For some simple examples that use the predefined AWS-RunShellScript document, see https://docs.aws.amazon.com/systems-manager/latest/userguide/walkthrough-cli.html
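
A minimal sketch of that flow, assuming an EC2 instance already managed by SSM (the instance ID below is a placeholder):

# Send a shell command to an instance via SSM Run Command and capture the command ID.
COMMAND_ID=$(aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters 'commands=["uptime"]' \
  --query "Command.CommandId" --output text)

# Fetch the output once the command has finished running.
aws ssm get-command-invocation \
  --command-id "$COMMAND_ID" \
  --instance-id "i-0123456789abcdef0"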

If only these could be run against ECS (especially Fargate) services 😉

vrioux commented 5 years ago

+1

alex-mcleod commented 5 years ago

I've managed to successfully use SSM to get shell access to containers in our Fargate-based ECS environment.

In our case, we just needed a way to run a Django shell on a container running in production, so we created a specific service definition that starts a container including Django and the amazon-ssm-agent. We can then start an SSM session via the AWS console to access that container and run commands.

We did a handful of things to get this working.

It's a bit of a hack, but it works!
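
A rough sketch of what such a setup might involve, assuming SSM hybrid activations are used to register the agent; the environment variables below are hypothetical and would be injected into the task (e.g. from Secrets Manager), and this is not necessarily the exact approach used above:

#!/usr/bin/env bash
# Hypothetical container entrypoint: register the amazon-ssm-agent with a
# hybrid activation, start the agent in the background, then run the app.
set -euo pipefail

amazon-ssm-agent -register \
  -code "${SSM_ACTIVATION_CODE}" \
  -id "${SSM_ACTIVATION_ID}" \
  -region "${AWS_REGION}"

amazon-ssm-agent &

exec "$@"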

kennyjwilli commented 5 years ago

I'm also interested in this feature in order to be able to create a JVM thread dump on a running Fargate task. There does not seem to be a way to do that right now.

michaeldnelson commented 5 years ago

Yes, you can catch things in dev, and you can build and push additional containers to run one-off commands. The fact remains that even with those capabilities, debugging anything on Fargate takes me 20x as long as on a standard container that I can SSH into and inspect. Production is not dev; maybe some people are lucky enough to have an exact replica of production to develop against, but I doubt that is true for the majority. +1 for Fargate SSH or something similar.

84nm commented 5 years ago

I have been watching this issue because of Fargate.

It is very inconvenient to debug problems that occur during the development process on Fargate.

@alex-mcleod's workaround may be nice, but I hope individual users won't be forced to resort to hacks just to connect to a container.

sandrom commented 5 years ago

I would also be interested in such a solution. Not that I would like to use it myself, but there are people who still require occasional shell debug access to prod. I know, I know, it's not cool, but that doesn't change the fact. Has anyone come up with a solution that doesn't create a lot of stale resources, or a way to clean them up nicely?

dtelaroli commented 5 years ago

+1

daya commented 5 years ago

+100

jz-wilson commented 5 years ago

I've been SSH-ing into Fargate containers. I have an entrypoint script that, when running in a non-prod environment, installs ssh and adds the key. Then allow port 22 on the container and you can SSH in directly as root. I also have this set up inside a VPC. I only use it for troubleshooting in dev or staging when something unexpected happens, never in prod.

karlskidmore commented 5 years ago

@jz-wilson Any chance you could share a snippet of your entrypoint script?

jz-wilson commented 5 years ago

I copy my key into the image in my Dockerfile:

### Allow SSH Access for debugging ###
#? If ENV is not prod, this ssh key will be used to gain access to the container
COPY .docker/debug_key.pub /root/.ssh/

This is what I use in the entrypoint:

# Enable SSH access for debugging in non-prod environments only
if [[ ${ENV,,} != 'prod' ]]; then
    echo "Enabling Debugging..."
    apt-get update >/dev/null && \
    apt-get install -y vim openssh-server >/dev/null && \
    mkdir -p /var/run/sshd /root/.ssh && \
    cat /root/.ssh/debug_key.pub >> /root/.ssh/authorized_keys && \
    sed -i 's/prohibit-password/yes/' /etc/ssh/sshd_config && \
    chown -R root:root /root/.ssh && chmod -R 700 /root/.ssh && \
    echo "StrictHostKeyChecking=no" >> /etc/ssh/ssh_config && \
    /usr/sbin/sshd    # start sshd; it is typically not started automatically inside a container
fi

Then, for your ECS container definition, add the environment variable ENV. You should then be able to SSH into the container as long as the port is open.
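
For completeness, connecting might then look something like this; the IP is a placeholder for the task's private address inside the VPC, and debug_key is the private half of the key baked into the image above:

# SSH into the running Fargate task as root using the debug key
ssh -i ~/.ssh/debug_key root@10.0.1.23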

dtelaroli commented 5 years ago

For SSH access, maybe it is better to use AWS Systems Manager: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ec2-run-command.html

jz-wilson commented 5 years ago

@dtelaroli I think that is for getting into self-managed ECS hosts (EC2 container instances). My example was for Fargate; I should have clarified that.

michaelwood3 commented 5 years ago

+1

SidneyNiccolson commented 4 years ago

+1

wr0ngway commented 4 years ago

I've managed to successfully use SSM to get shell access to containers in our Fargate-based ECS environment.

From what I gather (someone correct me if I'm wrong), when you try to connect to a container using SSM Session Manager, it forces you into the advanced-tier pricing, which effectively charges about $5 a month per running container for the ability to treat it as a "managed on-premises instance" and thus let Session Manager connect to it. https://aws.amazon.com/systems-manager/pricing/#Session_Manager

kutzhanov commented 4 years ago

Maybe this will be useful for someone. I had a request from the dev team to make it possible to SSH into the containers running in a Fargate service (the containers didn't have sshd) so they could see the application logs and change some config files. I managed to find a temporary solution by setting up an additional container with sshd in the same Fargate service and mounting the volumes with the logs and config files from the main container. So we can SSH into the additional container, view the logs, and make changes to the main container's files.
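
A rough sketch of that sidecar pattern, registered via the CLI; the family, images, role, and paths below are hypothetical placeholders rather than the poster's actual definition:

# Hypothetical task definition: an "app" container and an "sshd" sidecar
# share an ephemeral volume so the sidecar can read the app's logs/config.
aws ecs register-task-definition --cli-input-json '{
  "family": "app-with-ssh-sidecar",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "volumes": [{"name": "shared-data"}],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "essential": true,
      "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/var/app/shared"}]
    },
    {
      "name": "sshd",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sshd-sidecar:latest",
      "essential": false,
      "portMappings": [{"containerPort": 22, "protocol": "tcp"}],
      "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/var/app/shared"}]
    }
  ]
}'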

deuscapturus commented 4 years ago

@kutzhanov That sounds like the right solution here. Would you be kind enough to share your container definitions for the "main" container and the sshd container? Thank you.

cruisemaniac commented 4 years ago

@kutzhanov This is definitely hacky, but it's a working solution. It looks like, at this point, we're not going to be able to run docker exec -it <containerID> /bin/sh on Fargate.

@nathanpeck this is something we had a quick chat about on Twitter. I understand that the entire premise of the Fargate service is to be completely hands-off, but there will always be non-ideal situations where looking AT the code and running one-off tasks becomes critical. If I need to launch a container with a new command every time I want this taken care of, it becomes more than cumbersome.

For me, my current poison is Rails. Being able to get into a running container and run RAILS_ENV=production bundle exec rails c is a very, very important ask.