Actual behavior

It seems as though the plugin counts even local overlayfs mounts as attached volumes, and therefore allows only a fraction (10-12 in our case) of the possible EBS volumes to be attached to an EC2 instance. See below.

Information

A common failure in our infrastructure:

```
sp56lxzn54ako1jqwoudogz1v \_ rinkeby_homechain0.1 localhost:5000/polyswarm/stable-mainnet:latest@sha256:1f2734e6fccae20a10c5d1e2e7228cdcdd2591b0755ab54208ccce73f8704def ip-172-31-42-17 Shutdown Failed 6 seconds ago "starting container failed: error while mounting volume '': VolumeDriver.Mount: error mounting volume: All device names used!"
```
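To rule out stale attachments on the AWS side, the kernel's view can be compared with what the EC2 API reports. A quick diagnostic sketch (assumes the aws CLI is installed, the instance role allows ec2:DescribeVolumes, and the instance metadata service is reachable):

```bash
# Ask the instance metadata service for our instance ID.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Block devices the kernel can see (disks only, no partitions).
echo "kernel view:"
lsblk -dn -o NAME

# EBS attachments the EC2 API reports for this instance.
echo "EC2 API view:"
aws ec2 describe-volumes \
  --filters "Name=attachment.instance-id,Values=$INSTANCE_ID" \
  --query 'Volumes[].Attachments[].Device' --output text | tr '\t' '\n' | sort
```

In our case both views agree (about a dozen devices), which points at the plugin's own accounting rather than the AWS side.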
Actual block devices (see EBS attachments):
```
[user@ip-172-31-42-17 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdn    202:208  0  60G  0 disk /var/lib/docker/plugins/da8da20a40e838c65f99c464f7c3d59a0a220849338d73a1be32baecb64a1d24/propagated-mount/ebs/gamma_rinkeby1
xvdl    202:176  0   2G  0 disk
xvda    202:0    0  50G  0 disk
└─xvda1 202:1    0  50G  0 part /
xvdj    202:144  0 100G  0 disk
xvdh    202:112  0  35G  0 disk /var/lib/docker/plugins/da8da20a40e838c65f99c464f7c3d59a0a220849338d73a1be32baecb64a1d24/propagated-mount/ebs/gamma_homechain3
xvdp    202:240  0  35G  0 disk
xvdf    202:80   0  35G  0 disk
xvdo    202:224  0  35G  0 disk
xvdm    202:192  0 100G  0 disk
xvdk    202:160  0   2G  0 disk /var/lib/docker/plugins/da8da20a40e838c65f99c464f7c3d59a0a220849338d73a1be32baecb64a1d24/propagated-mount/ebs/core
xvdi    202:128  0   2G  0 disk
xvdg    202:96   0  35G  0 disk /var/lib/docker/plugins/da8da20a40e838c65f99c464f7c3d59a0a220849338d73a1be32baecb64a1d24/propagated-mount/ebs/gamma_sidechain
[user@ip-172-31-42-17 ~]$ lsblk | wc -l
14
```
However, the actual mountpoint count is much higher:
```
[root@ip-172-31-42-17 /root]# mount | wc -l
85
```
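Most of that difference is presumably Docker's local overlay filesystems rather than EBS attachments. Assuming that is what the plugin is counting (our guess, not confirmed), the mount table can be broken down by type:

```bash
# Count only overlay mounts (Docker's local graph-driver filesystems);
# these are not EBS attachments and should not consume device names.
mount -t overlay | wc -l

# For comparison, count block devices that actually carry a mountpoint.
lsblk -rn -o NAME,MOUNTPOINT | awk '$2 != ""' | wc -l
```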
This makes me question how cloudstor:aws is counting/limiting/filtering EBS attachments in its code. It sure would be great if this plugin were open source.
Steps to reproduce the behavior

Launch containers in a swarm with at least 1 EBS volume (relocatable) each, as in the sketch below.
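A minimal reproduction sketch, assuming the documented cloudstor volume options (size, ebstype, backing=relocatable) and pinning everything to the node from the logs above; the volume and service names are placeholders:

```bash
# Create relocatable (EBS-backed) cloudstor volumes and start one service
# per volume on a single node. In our environment, mounts start failing
# with "All device names used!" after roughly 10-12 volumes on one node.
for i in $(seq 1 20); do
  docker volume create -d "cloudstor:aws" \
    --opt backing=relocatable --opt ebstype=gp2 --opt size=1 "repro_vol_$i"
  docker service create --name "repro_$i" \
    --constraint 'node.hostname == ip-172-31-42-17' \
    --mount type=volume,source="repro_vol_$i",target=/data,volume-driver=cloudstor:aws \
    alpine sleep 1d
done
```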
Is there any other way we can get more devs to notice this bug? This is a major hurdle right now, forcing us to use more EC2 instances just so that volumes are spread more evenly.
Expected behavior

Upon stack deploy with cloudstor:aws, the plugin should be able to attach up to 40 EBS volumes per instance, as specified here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html