darkl0rd opened 5 years ago
The problem seems to stem from the fact that AWS names these devices differently: in the instance's block-device mapping the volume is '/dev/xvdf', whereas in the OS the device appears as /dev/nvme2n1.
$ docker run -ti --rm -v test:/tmp alpine /bin/sh
docker: Error response from daemon: VolumeDriver.Mount: error mounting volume: failed to open device to probe ext4: open /dev/xvdf: no such file or directory.
$ ln -sf /dev/nvme2n1 /dev/xvdf
$ docker run -ti --rm -v test:/tmp alpine df -h /tmp
Filesystem Size Used Available Use% Mounted on
/dev/xvdf 975.9M 2.5M 906.2M 0% /tmp
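The manual symlink above can be scripted for all attached volumes. A minimal sketch, assuming an AWS Nitro-based instance with the nvme-cli package installed; it relies on the EBS device name (e.g. "xvdf") being stored in the vendor-specific area of the NVMe identify-controller data, starting at byte offset 3072. This is an illustrative workaround, not part of cloudstor itself:

```shell
#!/bin/sh
# For each NVMe namespace, read the original EBS device name out of the
# identify-controller data and create the matching /dev/xvdX symlink.
mapped=0
for dev in /dev/nvme*n1; do
  [ -b "$dev" ] || continue            # skip when the glob matched nothing
  # Dump the raw 4096-byte identify-controller structure and extract the
  # 32-byte vendor-specific field that holds the EBS mapping.
  name=$(nvme id-ctrl --raw-binary "$dev" 2>/dev/null |
         dd bs=1 skip=3072 count=32 2>/dev/null | tr -d ' \0')
  case "$name" in
    "")     continue ;;                # no mapping found (or nvme-cli missing)
    /dev/*) ;;                         # already an absolute path
    *)      name="/dev/$name" ;;       # bare name such as "xvdf"
  esac
  [ -e "$name" ] || ln -s "$dev" "$name"
  mapped=$((mapped + 1))
done
echo "mapped $mapped device(s)"
```

Note this has to be re-run after every reboot or volume attach (for example from a udev rule or boot script), since the symlinks do not persist.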
It's the same problem as #184.
Expected behavior
My volume to be mounted inside the container when defining it either on the CLI or through a compose-file.
Actual behavior
A diversity of errors when mounting the volume.
When attempting to start a Docker Swarm service that refers to a volume, the volume is also created successfully but cannot be mounted in the resulting service.
Sometimes the error is different: it does not complain about the device name but about the filesystem (ext4).
Information
docker 18.06.1-ce
docker-compose 1.21.2
docker4x/cloudstor 18.06.1-ce-aws1
plugin installed as follows:
IAM policy created as required (conforming to the CF template). I even went one step further and broadened the permissions to allow ec2:*.