Closed by ms-ati 9 years ago
Hello @ms-ati. I believe this issue is the same as the one in this forum thread.
Per my response there, can you try either restarting the docker daemon after making a mount change (`sudo stop ecs && sudo service docker restart && sudo start ecs`) or launching docker without the `unshare -m` command?
Best, Euan
Edit: To add a little more information, I believe the `unshare` is there in the first place to work around an issue in docker (see issue and workaround). Please note that if you remove the `unshare` command rather than restarting docker to propagate mount changes, there might be issues removing stopped containers.
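If it helps to confirm that the restart propagated the mount, a quick check (a sketch, not from the thread; the `/srv/data` path and the `busybox` image are illustrative choices) is to write through a throwaway container and look for the file on the host:

```shell
# After restarting the docker daemon, a bind-mounted write from a container
# should land on the EBS volume, not in the hidden mount-point directory
# underneath it. /srv/data is an assumed host mount path for illustration.
docker run --rm -v /srv/data:/data busybox sh -c 'echo ok > /data/probe'

# On the host, the file should now be visible without unmounting anything:
cat /srv/data/probe
```

If the file only appears after unmounting the EBS volume, the daemon is still using its stale mount namespace and needs another restart.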
@euank Thank you kind sir. Restarting the docker daemon after a mount change, as you describe above, does indeed fix the issue :clap:
@euank Is there any way to do this at ECS instance boot time? When I add the following to my autoscaling configuration user data:
```bash
#!/bin/bash
# Create filesystem on the new EBS device:
mkfs -t ext4 /dev/sdb
mkdir /srv/data/
mount /dev/sdb /srv/data
echo "/dev/sdb /srv/data ext4 defaults,nofail 0 2" >> /etc/fstab
# Register as an ECS instance:
yum install -y aws-cli
aws s3 cp s3://<my-bucket>/ecs.config /etc/ecs/ecs.config
echo ECS_CLUSTER=default >> /etc/ecs/ecs.config
# Restart docker service to allow it to see the new EBS volume:
stop ecs && service docker restart && start ecs
```
I don't see the volume working. SSH'ing into the ECS instance and running `stop ecs && service docker restart && start ecs` works, but this is much more difficult to automate. I've tried simple variations, such as adding `stop ecs && service docker stop` to the beginning of the user data script and then starting the services at the end, but to no avail.
If you're curious, here's how I ended up solving the problem. Add this to your user data for the ECS instance:
```
#upstart-job
description "Pre-ECS agent initialization"
start on (starting ecs or starting docker)
task
script
  # Create filesystem on the new EBS device:
  mkfs -t ext4 /dev/sdb
  mkdir /srv/data/
  mount /dev/sdb /srv/data
  echo "/dev/sdb /srv/data ext4 defaults,nofail 0 2" >> /etc/fstab
  # Register as an ECS instance with access to our private docker repositories:
  yum install -y aws-cli
  aws s3 cp s3://<my-bucket>/ecs.config /etc/ecs/ecs.config
  echo ECS_CLUSTER=default >> /etc/ecs/ecs.config
end script
```
The trick here is to tie the formatting and mounting steps to the start of `ecs` and `docker` via an upstart task, ensuring this task runs before either service is started on the ECS instance.
The problem is with `stop ecs`, which fails the first time since the service is not yet running.
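One way around that failing stop (a sketch, not from the thread) is to tolerate the error so the shell chain keeps going during first boot:

```shell
#!/bin/bash
# During first boot, 'stop ecs' exits non-zero ("Unknown instance") because
# the job hasn't started yet, which short-circuits an '&&' chain in user data.
# Tolerating that failure lets the docker restart proceed either way:
stop ecs 2>/dev/null || true
service docker restart
start ecs
```

This keeps everything in the plain user-data script instead of introducing a separate upstart job.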
Having the following in your user data script will also work without needing an upstart job.
```bash
#!/bin/bash
# Create filesystem on the new EBS device:
mkfs -t ext4 /dev/sdb
mkdir /srv/data/
mount /dev/sdb /srv/data
echo "/dev/sdb /srv/data ext4 defaults,nofail 0 2" >> /etc/fstab
# Register as an ECS instance:
yum install -y aws-cli
aws s3 cp s3://<my-bucket>/ecs.config /etc/ecs/ecs.config
echo ECS_CLUSTER=default >> /etc/ecs/ecs.config
# Restart docker service to allow it to see the new EBS volume:
service docker restart
start ecs
```
I have a task definition which mounts a directory from the host into the container for writing, like `/mnt/host` -> `/mnt/container`. I have an EBS volume mounted on the host at this path, for example `mount -t ext4 /dev/xvdf /mnt/host`.

No matter what I do, writes that originate inside the container end up in the host's mount-point directory, i.e. the `/mnt/host` directory hidden underneath the mounted EBS volume. So writes are invisible on the host machine until I unmount the EBS volume, at which point they are revealed in the underlying mount point.

Can anyone help me with this? I am using the latest ECS AMI (ami-ae6559c6) and a Docker image built very simply, starting with `FROM ubuntu:14.04`.
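For anyone hitting this symptom, a quick way to check the diagnosis (a sketch using the paths from the question above; the daemon process name is assumed to be `docker` on this era of ECS AMI) is to compare what the host and the docker daemon's mount namespace see at the path:

```shell
# On the host, /mnt/host should show the EBS filesystem:
findmnt /mnt/host

# If the daemon started before the mount (and was launched under
# 'unshare -m'), its private mount namespace still sees only the bare
# directory, so bind mounts source the hidden mount-point directory.
# Inspect the daemon's view via its /proc mount table:
DOCKER_PID=$(pidof docker | awk '{print $1}')
grep /mnt/host "/proc/$DOCKER_PID/mountinfo" \
  || echo "daemon's mount namespace does not see the EBS mount"
```

If the `grep` finds nothing, restarting docker after the mount (as described earlier in the thread) should bring the EBS mount into the daemon's namespace.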