hseipp closed this issue 6 years ago
@aswarke @yadaven is this ticket still relevant?
FYI - just retested with Spectrum Scale 5.0.0.1 with everything else (Ubiquity, Docker) unchanged, and I am getting the same error as described above. Please let me know when you have working Ubiquity head code for Scale; I will then re-test.
@hseipp The Capabilities API, along with the new plugin v2 architecture, will be addressed and fixed in the next Ubiquity release for Scale. We will keep you updated on this. The default mode in Docker Swarm allows the manager node to run tasks as well. If you'd like tasks to run only on worker nodes and not on the manager node, you need to put the manager in DRAIN state. For more information please refer to https://docs.docker.com/engine/swarm/swarm-tutorial/drain-node/
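For reference, a minimal sketch of draining and later reactivating a manager with the standard Swarm commands; <manager-node-name> is a placeholder:

docker node update --availability drain <manager-node-name>
docker node update --availability active <manager-node-name>   # revert once maintenance is done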
@aswarke Thank you for the feedback, I am looking forward to testing the next Ubiquity release. But please note that with Ubiquity 0.4.0 in the initial state, tasks are only run on the master node, i.e. the worker nodes are never used unless someone issues mmdsh -N <all docker nodes> docker volume ls.
Tried the plugin v2 with Scale 5.0.1, and it seems we still have that issue: nodes that are added to the swarm afterwards cannot execute the plugin; the container simply does not start. Gaurang is helping me, will keep you posted.
Update: after changing the version in the compose file to 3.3 (version: "3.3") and a docker restart ..., port 9999 is distributed as expected, so plugin v2 also works on newly added worker nodes.
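For clarity, a minimal sketch of the compose file header after that change; only the version line reflects what is described above, the service name and the rest of the definition are assumptions:

version: "3.3"
services:
  ubiquity:            # assumed service name; remainder of the definition unchanged
    ports:
      - "9999:9999"    # the Ubiquity port mentioned above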
Swarm support is not relevant for now. If needed, we will reopen it in the future.
After starting Ubiquity 0.4.0 and Docker 17.03.2 on RHEL 7.4 with Spectrum Scale 4.2.3.5, docker info does not show ubiquity as a plugin although the plugin is loaded. After I (successfully) create a volume using
docker volume create -d ubiquity --opt backend=spectrum-scale --name demo1
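A quick way to check whether the driver is registered, assuming a Docker version whose docker info supports --format (1.13 and later); the template field is the standard volume-plugin list from the Info structure:

docker info --format '{{.Plugins.Volume}}'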
docker info does show ubiquity (see output below). But then, when I start a service that requires an ubiquity volume with

docker service create --name helloworld --mode global --mount type=volume,volume-driver=ubiquity,source=demo1,destination=/mnt myregistry:5000/alpine /bin/sh -c 'sleep 4 && ip a > /mnt/`hostname` && ping myhost >> /mnt/`hostname`'
the service only gets executed on the leader node. However, when I do a mmdsh -N all docker volume ls
after launching the service, the container instances for the service "automagically" start. Note that this might be related to the missing Capabilities API; I'm getting a lot of log entries like dockerd: time="2017-12-18T14:23:37.036066601+01:00" level=warning msg="Volume driver ubiquity returned an error while trying to query its capabilities, using default capabilties: VolumeDriver.Capabilities: 404 page not found\n"
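For context, the legacy Docker volume plugin protocol expects the driver to answer a POST to /VolumeDriver.Capabilities, and the 404 above suggests this route is not implemented. A hypothetical probe; the address is an illustration only (here using the Ubiquity port 9999 mentioned in this thread, the real endpoint depends on how the plugin is registered):

curl -s -X POST -d '{}' http://127.0.0.1:9999/VolumeDriver.Capabilities
# a driver that implements the call is expected to answer roughly:
# {"Capabilities": {"Scope": "global"}}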
Docker info output: