Closed shoaib-tarams closed 3 years ago
+1
+1
can't you add your exec command to the dockerfile?
I'd like to run my rake db:migrate task and am not sure what the most elegant way to go about it is. It should run only the first time, when creating the cluster, to create the database and seed it with test data.
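One common pattern (sketched here with placeholder cluster, task definition, and container names) is to run the migration as a one-off task with a command override, so it doesn't ride along with the long-running service:

```shell
# Run rake db:migrate once as a standalone ECS task.
# "my-cluster", "my-app", and "web" are placeholder names.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-app \
  --count 1 \
  --overrides '{"containerOverrides":[{"name":"web","command":["rake","db:migrate","db:seed"]}]}'
```

The task exits when the command finishes, so it naturally runs only once; making it fire only at cluster-creation time would still have to live in whatever provisions the cluster.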
+1
+1
@VinceMD docker exec could be a valid use case when you want to update some code (git pull) in the container after the image is built. Putting the command in the Dockerfile would require rebuilding the image, pushing the image, restarting the ECS task...
+1
+1
I would like this functionality. Researching various secrets injection solutions where the container wouldn't have to be modified to include aws tools
I built a tool, ecsctl, to do this. However, you will need to customize the Docker daemon configuration on container instances so that it listens on a TCP port.
@cxmcc what about security? How do you secure the TCP port of the Docker daemon?
@panga Currently with networking configuration:
Externally, only open this port to a trusted network (VPN/bastion, etc.).
Internally, run an iptables rule to drop traffic going to that port from containers:
iptables --insert INPUT 1 --in-interface docker+ --protocol tcp --destination-port MYDOCKERPORT --jump DROP
Alternatively, I believe using a TLS cert may be possible, but I have not tried it out.
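For reference, a minimal sketch of the TLS approach, assuming certs have already been generated (all paths and hostnames below are placeholders):

```shell
# Start the Docker daemon with TLS client-certificate verification.
# Cert paths are placeholders; the CA/server/client certs must exist already.
dockerd \
  --host unix:///var/run/docker.sock \
  --host tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert /etc/docker/ca.pem \
  --tlscert /etc/docker/server-cert.pem \
  --tlskey /etc/docker/server-key.pem

# A client then connects with its own cert signed by the same CA:
docker --host tcp://MYHOST:2376 --tlsverify \
  --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem ps
```

With --tlsverify set, the daemon rejects any client whose certificate is not signed by the configured CA, which is what makes exposing the TCP port tolerable.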
So, if I am not wrong, right now it is not possible to run commands against running tasks, "docker exec"-style, is it?
@destebanm It's certainly possible to docker exec into a container that's running as part of an ECS task, but you currently need to identify the specific instance and container manually using ECS and Docker tooling and log in to the appropriate instance. This is clearly suboptimal, and we're tracking this as a feature request.
I'd be interested in hearing ideas for how this might work. What would be the ideal workflow around docker exec? Would people prefer that it be integrated into the web console, such that you can identify a task with the UI and get an interactive docker exec environment in the browser? Or would CLI integration be better?
On the ECS Agent, you could marshal the Unix socket to a TCP endpoint. This endpoint would need to be authed with an IAM token so that the console connecting to the socket is authenticated for a short period of time.
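A rough sketch of that marshalling with socat (the IAM-token auth layer, which is the hard part, isn't shown):

```shell
# Forward a local TCP port to the Docker Unix socket.
# Anything that can reach 127.0.0.1:2375 gets full daemon access,
# so in practice this must sit behind real authentication.
socat TCP-LISTEN:2375,bind=127.0.0.1,reuseaddr,fork \
      UNIX-CONNECT:/var/run/docker.sock
```

After that, a client on the same host can talk to the daemon over TCP, e.g. `docker --host tcp://127.0.0.1:2375 ps`.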
The best way of getting a quick shell would be from within the ECS console. You could right-click on the task itself, then open up a sh to the container. Otherwise you're using the CLI to list the containers, get statistics, yada yada. It just seems simpler to look at the metrics of a service, then go into a container that way. Docker Cloud did this integration a while back; it works great and it's a simple way to get into your container to do a quick ls or a curl to a database.
But you could also do a CLI integration that would behave in much the same way, by connecting your Docker CLI to that specific socket via an AWS API. I might be missing something, so please fill in the gaps if you have ideas!
I just want to say that I would love to be able to exec into a running ecs container from my Macbook terminal.
It would make debugging so much quicker and easier.
+1
I am working in a development ECS cluster with EC2 instances that another developer built using his own key pair. Therefore I can't ssh into the instance to run 'docker exec...'. It would be great if something was made available to do this.
@nmeyerhans Not sure if there's a better issue/repo to discuss being able to exec into an ECS task container but this seems to be the best I can find for now.
I was considering spending some time writing an ECS executor for Gitlab Runner that would allow people to run CI jobs as one off ECS tasks but the Gitlab Runner model for both Docker and Kubernetes is to run a container and then exec into it so it can receive the output easily.
I was thinking about seeing if I could hack something together where it overrides the command each time with the script lines concatenated together and then tries to read the logs out of CloudWatch Logs, but it's horribly ugly and the delay on fetching the logs is probably going to be impractical, let alone not being able to support things like after_script (although that's less needed for my use cases right now).
If being able to exec into an ECS task container was possible then I think an ECS executor for Gitlab should be easy enough to write and would be a real benefit for my company. Coupled with Fargate that would be a really, really interesting way of running our CI workloads. That said, I'm also considering just waiting for EKS access and then moving to Kubernetes executors as that's the least work to get this off the Docker-Machine runners I'm using. I expect that will probably be the thing that moves me from ECS to Kubernetes for production services as well although I do prefer the relative simplicity of ECS to k8s.
@nmeyerhans when you say:
It's certainly possible to docker exec into a container that's running as part of an ECS task, but you currently need to identify the specific instance and container manually using ECS and Docker tooling and log in to the appropriate instance.
Can you explain how to do that? That would suffice for me as a workaround...
@harlantwood just ssh into your ECS instance and run docker exec..
ssh ec2-user@my-ecs-server
docker ps
docker exec -it 34cfe4c6b6d5 sh
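For anyone who doesn't know which instance a given task landed on, a sketch of walking the ECS APIs to find it (cluster and service names are placeholders; assumes AWS CLI credentials and a task with a public IP):

```shell
# Resolve task -> container instance -> EC2 instance -> IP, then SSH in.
CLUSTER=my-cluster
TASK_ARN=$(aws ecs list-tasks --cluster "$CLUSTER" --service-name my-service \
  --query 'taskArns[0]' --output text)
CI_ARN=$(aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$TASK_ARN" \
  --query 'tasks[0].containerInstanceArn' --output text)
EC2_ID=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
  --container-instances "$CI_ARN" \
  --query 'containerInstances[0].ec2InstanceId' --output text)
IP=$(aws ec2 describe-instances --instance-ids "$EC2_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
ssh ec2-user@"$IP"
```

From there, `docker ps` plus `docker exec -it <id> sh` as above.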
That works perfectly when doing ECS/EC2.
How about when doing ECS/Fargate? Is it possible? With Fargate you don't have access to the host machine at all.
+1
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1 for Fargate
+1
+1 for Fargate
+1 for Fargate
For Fargate, has anyone had luck opening ssh access to the container? Yes I do believe that would require an image with sshd2 and a known key (not ideal!), and opening port 22.
+1 for a Fargate solution. Can't open port 22 and allow ssh. (company policy)
For AWS ECS using an EC2 cluster, we can access the container by SSHing into the EC2 instance. But how can I access the container in Fargate mode?
+1 for Fargate
+1 for a Fargate ssh access!
+1 for Fargate
++Fargate
+1 on Fargate. I can't believe this feature is missing from the get-go. 😐
For Fargate, has anyone had luck opening ssh access to the container? Yes I do believe that would require an image with sshd2 and a known key (not ideal!), and opening port 22.
@enthal I have been able to do this in Fargate. The process is the same as with opening any other TCP port (Dockerfile, container settings, and security group).
@JamesRyanATX great. How did you manage keys in practice? Making it possible is not the same as making it secure (without making it cumbersome). Did you do anything other than bake the private key into the docker image? Thanks! :)
@enthal you just want to SSH into the container, right? If so, then your Docker image only needs the public key. Your private key is used in the handshake as normal.
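For anyone trying this route, a rough Dockerfile sketch (base image, user name, and key handling are assumptions, not a vetted recipe; only the public key is baked into the image):

```dockerfile
# Sketch only: add sshd to an Alpine-based image for Fargate debugging.
FROM alpine:3.8
RUN apk add --no-cache openssh \
 && ssh-keygen -A \
 && adduser -D app
# Only the PUBLIC key goes into the image; the private key stays with you.
COPY authorized_keys /home/app/.ssh/authorized_keys
RUN chown -R app /home/app/.ssh \
 && chmod 600 /home/app/.ssh/authorized_keys
EXPOSE 22
# May also need to unlock the account / tweak sshd_config for key-only login.
CMD ["/usr/sbin/sshd", "-D", "-e"]
```

Port 22 then has to be mapped in the task definition and allowed in the security group, as noted above.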
+10 for Fargate though I don't even use Fargate
It would be awesome if we could do this from the SDK:
const instances = await ecs.listContainerInstances({ cluster }).promise();
const arn = instances.containerInstanceArns[0];
const { stdout, stderr } = await ecs.exec(arn, '/bin/ps', ['aux']).promise();
Something like that...
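Until something like that exists, one present-day approximation on EC2-backed clusters (not Fargate) is to drive docker exec through SSM Run Command, assuming the SSM agent is installed on the instances; the instance ID and container name here are placeholders:

```shell
# Run a one-off command inside a container on a known instance, no SSH needed.
aws ssm send-command \
  --instance-ids i-0123456789abcdef0 \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["docker exec my-container ps aux"]'
```

The output then has to be fetched separately (e.g. with `aws ssm get-command-invocation`), so it's batch-style rather than interactive.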
I think there's a general need to be able to run a command against all the running tasks in a service, and it would be ideal to extend the service API to do this. There are times when all I want is for the running service tasks to refresh a configuration, for example.
The most efficient way to do this now is to update the service, which in my opinion is both unreliable and total overkill. I say unreliable because updating the service does NOT reliably replace all running tasks; I've consistently gotten flaky results with this, to the point where I don't bother with it anymore. Instead I first kill the tasks by hand, then update the service. That needs to be fixed too, and being able to confirm how long a task has been running would also be ideal.
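For the refresh-all-tasks case specifically, newer CLI versions have a flag that cycles every task in a service without a task-definition change (hedged: tasks are still replaced gradually, per the service's deployment configuration, not instantly):

```shell
# Replace all running tasks of a service with fresh ones,
# without registering a new task definition revision.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --force-new-deployment
```

This avoids the kill-tasks-by-hand step, though it still restarts containers rather than sending them a command.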
+1 for Fargate
Bumping up into this issue as well. +1 for a good solution
+1
+1
+1
+1
Hi, I have three containers running in ECS, but the website only comes up after we run a "docker exec..." command. I can do this by logging into the server and running the command, but that shouldn't be necessary. So my question is: how can I run "docker exec..." without logging into the server? A solution via the Amazon ECS console, ecs-cli, or anything else you know would work; since ecs-cli can already create clusters, tasks, etc. from our local machine, how can we run docker exec from the local machine into the containers?
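Short of real docker exec support, the closest sketch today is to push the command through SSH non-interactively (host and container names are placeholders), so no interactive login session is needed:

```shell
# Runs the command inside the container in one shot from the local machine.
ssh ec2-user@my-ecs-server 'docker exec my-container my-startup-command'
```

This still depends on SSH access to the instance, which is exactly what the feature request in this thread would remove.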