kacole2 opened this issue 9 years ago
Interesting proposal. Why not just make a docker-compose.yml and have only containers which run the plugins you are interested in? e.g.:

```yaml
pluginsdir:
  volumes:
    - /run/docker/plugins:/run/docker/plugins
nfsvolumes:
  image: nathanleclaire/docker-volumes-nfs
  volumes_from:
    - pluginsdir
```
(Just FYI, this won't "Just Work", it's simply an example).
It's true that tying this together with Machine will require some bash glue today, but it's a simple solution at least.
@nathanleclaire docker-compose can do that today with volume drivers (wordpress-compose example)... But, you still have to go through an installation process on the host to make any plugin available to the containers. This goes for Flocker, Weave, Rexray, etc.
@kacole2 But you can run plugins in containers. You just have to expose the socket to the host in the directory where Docker expects it.
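For example, something along these lines would expose the plugin socket directory to the daemon (just a sketch, using the image from the compose snippet above and assuming it needs no other flags):

```
docker run -d --name nfs-plugin \
  -v /run/docker/plugins:/run/docker/plugins \
  nathanleclaire/docker-volumes-nfs
```

The daemon then discovers the plugin via the socket the container writes under /run/docker/plugins.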
@nathanleclaire So I can think of a couple of things here that may present challenges for this scenario.
It is a chicken-or-the-egg scenario. If containers that want their persistent data rely on another container that sits beside them, race conditions can ensue. Say the daemon restarts and the containers start, but the volume driver container has not started yet. The volume driver (VD) represents a critical service that needs some level of priority, which today is addressed at startup of the host's services.
Shouldn't that be Docker Compose's responsibility, though? For instance, it should start the containers you use volumes_from on before the dependent ones.
Would the ability to automatically run a set of compose files when Docker Machine boots an instance help?
@nathanleclaire Sure, there is some priority that can be addressed using Compose, and there are plenty of things to consider in that area. But the real deal breaker for functionality is the second point: I don't believe that mounting functionality can be addressed from within a container.
Docker Machine Extensions (or call it plugins)
Abstract
Docker Machine gets you a "docker ready" host, but what about everything else that integrates with Docker Engine? Docker Engine 1.8 introduced a pluggable architecture for third-party network and storage extensions. However, most of these require an installation process that must be performed after the host is provisioned. The Docker Machine extensibility feature is an interface for specifying Docker Engine plugins to be installed during host deployment.
Motivation
Docker Machine could be a replacement for Chef/Puppet/Ansible/Salt, but it lacks some of the ability to deliver a completely configured "docker ready" host. As more extensions for storage, networking, security, etc. become available, many users will want them installed during the Docker Machine provisioning/configuration process.
Interface
After reading #1626, it seems that everyone is looking for a standardized interface for building drivers. The same would go for extensions: the interface would need a way to identify each extension by name (matching the title in the user's YAML/JSON file) and to run its installation steps against the provisioned host.
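As a rough sketch (names are illustrative, not settled), the interface in extensions.go might look something like this, with a small stand-in for the provisioner's SSH access:

```go
package extensions

// SSHCommander is a stand-in for the piece of libmachine's provisioner
// that extensions would rely on: the ability to run a command on the
// provisioned host over SSH. The exact signature is an assumption.
type SSHCommander interface {
	SSHCommand(args string) (string, error)
}

// Extension is a hypothetical shape for each extension living under
// libmachine/extensions. All names here are illustrative.
type Extension interface {
	// Name returns the key that identifies this extension in the
	// user-supplied YAML/JSON file passed to --extension.
	Name() string

	// Install runs the extension's installation steps on the newly
	// provisioned host, using the options parsed from the file.
	Install(p SSHCommander, options map[string]string) error
}
```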
Implementation
This is a proposal, so please discuss... After the interface is set up, I envision this being driven from `docker-machine create` with a new flag called `--extension`. The user would specify a YAML or JSON file, which is useful because it allows multiple key:value pairs to be passed in:

```
docker-machine create --provider virtualbox --extension something.yml
```

or

```
docker-machine create --provider virtualbox --extension something.json
```
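For illustration only, something.yml might look roughly like the following; the extension names and their options are placeholders (the plugins are ones mentioned earlier in this thread):

```yaml
# hypothetical something.yml
# each top-level key names an extension; the nested key:value pairs are
# options passed to that extension's installer
rexray:
  version: latest
weave:
  password: example-password
```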
The extension files will live in `libmachine/extensions`. The interface will be in `extensions.go`, and each extension will need its own `.go` file that corresponds with the title in the YML/JSON file. That file specifies the installation process using the `provisioner.SSHCommand` method.
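As a rough sketch of one of those per-extension files, reusing the SSHCommander stand-in from the interface sketch above (the nfsvolumes name and the image come from the compose example earlier in this thread; everything else is illustrative):

```go
package extensions

// nfsVolumes is a hypothetical extension matching an "nfsvolumes" key in
// the user's YAML/JSON file. It installs the volume plugin by running it
// as a container and exposing the plugin socket directory to the host.
type nfsVolumes struct{}

func (e *nfsVolumes) Name() string { return "nfsvolumes" }

func (e *nfsVolumes) Install(p SSHCommander, options map[string]string) error {
	// Start the plugin container so the daemon can discover its socket
	// under /run/docker/plugins. Options from the extension file could be
	// translated into flags or environment variables here.
	_, err := p.SSHCommand(
		"sudo docker run -d --restart=always " +
			"-v /run/docker/plugins:/run/docker/plugins " +
			"nathanleclaire/docker-volumes-nfs")
	return err
}
```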
Possible Issues

- Extensions would always be `latest` if pulling from GitHub.
- Would we use `scp` to copy the file to the host to perform the installation? Perhaps we can use Proposal #179 for this?

Other
I'm looking for more input on the architecture, but I want to start building this functionality for the first PR. Let me know your thoughts.