As a dotmesh dev team, we'd like to be able to easily connect to a dotmesh cluster with dm so we can diagnose broken dotmesh installs - including things like runners, which set dotmesh up so that it's not exposed to the host network.
Currently, to do this on a runner, you need to docker exec into a container attached to the right network, download dm from get.dotmesh.io/unstable/master/Linux/dm, chmod +x it, examine the dotmesh-server-inner container's environment to get the initial admin password, do a dm remote add to get at it... and then you're in. Until the container gets recreated.
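For reference, the current fiddly procedure looks roughly like this (the container name you exec into, the password variable name, and the exact dm remote add syntax are from memory here and may need checking against a real install):

```shell
# exec into some container attached to the right (non-host) network
$ docker exec -ti some-container-on-the-right-network sh

# inside that container: fetch dm and make it executable
$ curl -o /usr/local/bin/dm https://get.dotmesh.io/unstable/master/Linux/dm
$ chmod +x /usr/local/bin/dm

# back on the host: dig the initial admin password out of the
# dotmesh-server-inner container's environment
$ docker inspect dotmesh-server-inner --format '{{.Config.Env}}' \
    | tr ' ' '\n' | grep -i password

# finally, add a remote pointing at the cluster and authenticate
$ dm remote add local admin@dotmesh-server-inner
```

All of that state lives inside the throwaway container, which is why it evaporates as soon as the container gets recreated.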
However, it wouldn't be too hard to put the dm binary in the dotmesh-server image in root's $PATH, and have require_zfs.sh spit out a working configuration in ~root/.dotmesh/config so we can just exec in and run dm. This would save a fiddly series of steps every time we need to go in and have a look.
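The file require_zfs.sh would spit out is just dm's normal remote configuration. A sketch of what it might contain (the field names, port, and layout here are illustrative, not checked against the current dm source):

```json
{
  "CurrentRemote": "local",
  "Remotes": {
    "local": {
      "User": "admin",
      "Hostname": "127.0.0.1",
      "Port": 32607,
      "ApiKey": "<initial admin password from the server environment>"
    }
  }
}
```

Since require_zfs.sh already knows the initial admin password when it brings the server up, it can write this file at the same time, and dm inside the container would work with no further setup.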
ACs
- [ ] It's possible, on a Docker install with a dotmesh server running, to type (for instance) docker exec -ti dotmesh-server dm list and get back a list of dots.