moby / moby

The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems
https://mobyproject.org/
Apache License 2.0

How do I combine several images into one via Dockerfile #3378

Closed · anentropic closed this issue 10 years ago

anentropic commented 10 years ago

I have several Dockerfiles to build images which e.g. set up a PostgreSQL client, or set up a generic Python app environment

I want to make a Dockerfile for my Python webapp which combines both of those images and then runs some more commands

If I understood the docs correctly, if I use FROM a second time I start creating a new image instead of adding to the current one?

SvenDowideit commented 10 years ago

you chain them :)

so for example, if you have one Dockerfile that sets up your generic postgres client and generic python app env, you tag the result of that build (e.g. mygenericenv), and then your subsequent Dockerfiles use FROM mygenericenv.

for example:

## Dockerfile.genericwebapp might have FROM ubuntu
cat Dockerfile.genericwebapp | docker build -t genericwebapp -
## Dockerfile.genericpython-web would have FROM genericwebapp
cat Dockerfile.genericpython-web | docker build -t genericpython-web -
## and then this specific app I'm testing might have a Dockerfile that contains FROM genericpython-web
docker build -t thisapp .

anentropic commented 10 years ago

I can see how to do that, i.e. genericA --> specificA but is there any way to do something like:

genericA --
            \
             ---> specificAB
            /
genericB --

?

tianon commented 10 years ago

Not through any official means, but some people have had luck manually modifying the image hierarchy to achieve this (but if you do this, you do so at your own risk, and you get to keep all the pieces).

The reason this won't be supported officially: imagine I want to take "ubuntu" and graft "centos" on top. There would be lots of really fun conflicts, causing a support nightmare, so if you want to do things like that, you're on your own.

anentropic commented 10 years ago

Ok I see why. I was looking for composable blocks of functionality but maybe this isn't the Docker use case... seems like I should be using it to set up the raw containers then run something like ansible or saltstack on top to configure the software in them.

shykes commented 10 years ago

The idea behind containers is that the smallest unit of real composition is the container. That is, a container is the smallest thing you can produce in advance, not knowing what else it will be combined with, and have strong guarantees of how it will behave and interact with other components.

Therefore, any unit smaller than a container - be it a ruby or shell script, a c++ source tree, a binary on its own, a set of configuration files, a system package, etc. - cannot be safely composed, because it will behave very differently depending on its build dependencies, runtime dependencies, and what other components are part of the composition.

That reality can be partially masked by brute force. Such brute force can be pragmatic and "good enough" (giant Makefile which auto-detects everything for a more portable build of your app) or overly grandiose ("let's model in advance every possible permutation of every dependency and interference between components, and express them in a high-level abstraction!")

When you rely on Ansible, Chef or any other configuration management to create "composable components" you are relying on a leaky abstraction: these components are not, in fact, composable. From one system to the next they will produce builds which behave differently in a million ways. All the extra abstraction in the end will buy you very little.

My advice is to focus on 2 things: 1) the source code, and 2) the runnable container. These are the only 2 reliable points of composition.


anentropic commented 10 years ago

Thanks for giving more perspective.

So you're saying that for reusing parts of Dockerfiles the only tool available is copy and paste? Coming from more of a 'dev' than 'ops' point of view it feels a bit wrong.

Maybe it's a mistake having the public index of images: it makes it seem like you can share reusable building blocks vaguely analogous to Chef recipes, but my experience so far is that it is not useful because:

a) for most images there's no info about what the image does and what's inside
b) the docs encourage committing your work to the index (so you can later pull it) even though what you made is probably not useful to others; I'm guessing most of what's in there is not worth sharing

I feel like the docs don't really guide you to use Docker in a sensible way at the moment

unclejack commented 10 years ago

@anentropic The right way to do this with Dockerfiles is by building multiple images with multiple Dockerfiles. Here's an example: Dockerfile 1 builds a generic image on top of an Ubuntu base image, Dockerfile 2 uses the resulting image from Dockerfile 1 to build an image for database servers, and Dockerfile 3 uses the database server image and configures it for a special role.

docker build should be quite easy to run and unnecessary complexity shouldn't be added.

The public index of images is extremely useful. Docker images are usually meant to run one service or a bunch of services which can't run in separate containers. You can usually pull an image, run it and get some useful software up and running without much effort.

anentropic commented 10 years ago

Understood... so in the scenario I outlined with ascii art above, the Docker way would be:

The problem I see is that if the 'recipe' (to borrow a Chef term) for GenericB is quite complex and has many steps there is no way I can share this info, except by publishing the Dockerfile to Github so that others can copy and paste the relevant parts into their own Dockerfile.

Have you tried using the public index? For example, I did a search for "postgres"... how do I judge the usefulness of (or distinguish in any way between) the images that come back?

What value do these provide, when the only way to be sure I've got a Postgres server set up the way I want, on a particular base image, with nothing dodgy hidden in there, is to create it myself from scratch?

I can see the value of some 'officially blessed' base images in a public index. I can see the value of having a private index of my own custom images ready to pull from.

But it seems a shame that there's no way (apart from copy & paste) to share the series of commands in the Dockerfile as a recipe... such as the suggestion for an 'include' command that was rejected here https://github.com/dotcloud/docker/pull/2108

unclejack commented 10 years ago

@anentropic You can use a trusted image and you can also find a postgres Dockerfile to build the image yourself.

Images are usually more useful when you customize the Dockerfile to ensure they fit your exact needs. That's why, as you've discovered, many users have uploaded their own image for the same piece of software to the registry.

Existing specific images like the postgres images might not meet your particular needs, but there are also base images and these can be used right away to build something which is useful for you.

Base images like ubuntu, centos and some images from stackbrew/* are images you can use to build what you need.

An example of a great ready-to-use image is stackbrew/registry. This image lets you play around with a private Docker registry as soon as docker pull stackbrew/registry and docker run -p 5000:5000 stackbrew/registry are done executing.

Docker's goal is to help with deployment and with preparing the environment where your software runs. This means that builds are linear and done only during the initial build, but you will run the exact same software every single time.

Configuration management systems may allow you to do something more or employ some other tricks, but they're not as "immutable" and you can end up having two hosts which have subtle differences which aren't picked up by the configuration management software.

jakirkham commented 9 years ago

Hate to necro an old thread, but I wanted to offer something that IMHO helps resolve the original poster's problem and may help others looking for a similar solution.

Imagine I have service A and service B. I want them in separate Docker images, and also both in the same Docker image. Let us assume for simplicity that they all use the same base image R.

Write a script to install service A and a separate script to install service B. Then have a git repo with the script for A and another one with the script for B. Create git repos for all three Docker images that will be built; each contains git submodules with the install script(s) that will be used. Each Dockerfile simply ADDs an install script and then RUNs it, doing this for one or both scripts. If you wish to remove the script(s) from the image, tack that on after running them.

This way there is one copy of each install script and any number of Docker images using them. This avoids unnecessary copying of code and keeps the maintenance burden minimal. The only duplication of effort is bumping the commit used by the submodules, which is significantly better than the alternative and could probably be automated.
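
To make this concrete, here is a minimal sketch of the combined image's Dockerfile, assuming the shared base is ubuntu and the submodules are checked out under scripts/ (all names are hypothetical):

## the shared base image "R"
FROM ubuntu:14.04
## install service A from its submodule's script, then remove the script
ADD scripts/a/install_a.sh /tmp/install_a.sh
RUN /tmp/install_a.sh && rm /tmp/install_a.sh
## install service B the same way
ADD scripts/b/install_b.sh /tmp/install_b.sh
RUN /tmp/install_b.sh && rm /tmp/install_b.sh

The A-only and B-only images would each keep just their own ADD/RUN pair.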

rjurney commented 8 years ago

I think I misunderstand how this works, so I'm replying to get clarification. I want to use Ubuntu 11 with the official Selenium Docker images. They use Ubuntu 15.

https://github.com/SeleniumHQ/docker-selenium/blob/master/Base/Dockerfile

What is the correct way for me to do this? To clone that repo and edit all the files to say Ubuntu 11 and not 15? This can't be right, can it? This would mean that everyone with any disagreement with any aspect of the official images can't make use of them without duplicating their code. I think I have it wrong; can someone explain? What is the right way to use the official Selenium image with Ubuntu 11?

thaJeztah commented 8 years ago

@rjurney yes, that's how that would work; in your example, the whole Dockerfile is developed with ubuntu:15.04 in mind; are those packages available on ubuntu:11? Do they work? Does selenium run on them? Chances are that modifications need to be made in the Dockerfile to make it work on another version of Ubuntu.

"swapping" the base image of an existing image also wouldn't work, because Docker only stores the differences between the base-image and the image. Using a different base-image therefore leads to unpredictable results (e.g., "remove file X", where "file X" exists in the original base image, but not in the base image you selected). Also, the packages/binaries in images building "on top" of a base images, are packages that are built for that version, those binaries may not be compatible with a different base image.

This would mean that everyone with any disagreement with any aspect of official images can't make use of them without duplicating the code for them

Yes. The official images are supported by the maintainers of those images (which in this case, are the maintainers of Selenium). If you think changes are needed to those images, the best way is to open a feature request in their repository. If that feature request is not accepted, you should probably build your own version.

(Also note that there is no official ubuntu:11 image)

rjurney commented 8 years ago

In the rest of the software world, single inheritance is not seen as adequate to reasonably express needed semantics. It leads to much code duplication, which would be considered a bug. Why is this seen as acceptable for docker? Even if you're building one service at a time, composition is needed at the operating system level. I don't mean to beat a dead horse, but this limit seems a little extreme. Might it be better expressed as a best practice? As a result of the strictness of this decision, someone will build a tool that does composition or multiple inheritance and expresses them through single inheritance and duplication. Having this be outside docker proper will not serve the docker community.


cpuguy83 commented 8 years ago

@rjurney multiple inheritance is also extremely complex and not something you just add in without thought for consequences, corner cases, and incompatibilities.

#12749 (https://github.com/docker/docker/pull/12749) was the latest attempt to add such functionality -- ultimately declined because there is other work to be done first.

There's a lot of work being done on the builder, including enabling client-driven builds which can open this up quite a bit.

Single-inheritance Dockerfiles work for the (vast) majority of use cases, so there is no rush to enhance this. It needs to be done correctly and deliberately. And based on your comments above, I'd say you don't actually need multiple inheritance, just a way to specify a base image that the Dockerfile is run against without duplicating the existing code.

rjurney commented 8 years ago

That would satisfy my needs, yes: being able to modify some property of the chain of Dockerfiles.

Ok, glad to hear you are on top of this. Thanks for your patience :)


docbill commented 8 years ago

@rjurney Where do you get your information? To my knowledge Java has never had multiple inheritance, and never will. I'm sure the same is true for many languages. Many consider multiple inheritance extremely harmful, as it can result in almost impossible-to-predict code. The same would be true for a Docker container.

As I see it, what we need for Docker is not the concept of multiple inheritance, but the concept of an include or external dependencies. E.g. you can mount containers at run time; what is truly needed is a way to do the equivalent with images. So you could, for example, have an image that was defined to be based on Fedora 22, and mount an Oracle image to add database functionality.

This can be done quite successfully when running containers, but there is just no syntax for specifying it with images. So until run time there is no way Docker can know about these dependencies or in any way manage them for you.
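
At run time the closest existing equivalent looks roughly like this; a minimal sketch, assuming a hypothetical oracle/database image that exposes its software in volumes:

## a container that only carries the database volumes
docker create --name oracle-bits oracle/database
## mount those volumes into a Fedora-based container at run time
docker run --rm --volumes-from oracle-bits fedora:22 /bin/bash

The missing piece is equivalent syntax at image-build time.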

rjurney commented 8 years ago

Please note that I mentioned multiple inheritance and composition. Composition is the preferred way to do this, definitely.

I agree with everything else you said, so +1.


rjurney commented 8 years ago

I'm going to shut up after this, but I put this rant in the aforementioned pull request instead of this ticket, by mistake. So I'm putting it here.

Someone is going to build this. Not accepting a pull that adds INCLUDE will delay and externalize this feature. This should be the basis of the decision here: should this be inside docker or outside docker?

An example comes to mind. In Apache Pig, the team made the decision not to include loops, despite many requests for them, because it was decided that Pig should be great for DAG dataflows and that is it. Instead, an integration was created to script pig scripts, so you could loop through scripts from any JVM language. Note that this was a conscious decision and that alternatives were pursued. This is the model process in my opinion.

Another Pig example comes to mind... Pig Macros. They didn't exist and were 'un pig' until someone (ok, me) started a thread about how incredibly ugly their large pig project was and that there was no way to fix this problem without generating Pig from an external tool, which was undesirable. Many people chimed in, and the Pig team added macros. Macros make clean pig possible, and the community benefitted.

I suggest that you address the decision head on and have a discussion around it, which hasn't occurred here yet, and for findability probably belongs here. This will exist. Duplicating scripts in domain specific languages is terrible. The people will demand it. Will this feature be inside Docker or outside Docker? How will you facilitate this behavior outside of docker?

Sorry, I'm probably missing lots of context on the mailing list, but as a new Docker user... I feel very hesitant to do much with Docker without the ability to compose dockerfiles from existing recipes. I went down this road with Pig, and it nearly killed me. I think many people will feel this way.

In case anyone cares...

The half-adopted presentation about loops and macros in Pig: http://wiki.apache.org/pig/TuringCompletePig Pig Macro JIRA: https://issues.apache.org/jira/browse/PIG-1793 API Interface to Pig JIRA: https://issues.apache.org/jira/browse/PIG-1333 One that was outright rejected to respect Apache Hive... add SQL to Pig: https://issues.apache.org/jira/browse/PIG-824

Finally, I had an idea that might make this change easy... what if INCLUDE'd files can't inherit? I.e. you would avoid objections by keeping things super simple, and deal with the rest later as more is learned. There could be a simple Dockerfile, for instance, that installs the prerequisites and binaries and sets up daemons for MySQL on Ubuntu. If need be, this could be versioned by version of Ubuntu and MySQL. Personally, I'm going to hack a utility to do these simple INCLUDEs and use it to organize my Dockerfiles this way. I can't wait to order and re-use my code.
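
Such a utility can be tiny; a rough sketch, assuming the fragments sit next to a Dockerfile.in that uses a hypothetical INCLUDE directive:

## expand every "INCLUDE <file>" line with that file's contents, then build
awk '/^INCLUDE /{ system("cat " $2); next } { print }' Dockerfile.in > Dockerfile
docker build -t myapp .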

DJGummikuh commented 8 years ago

+1 for the INCLUDE idea. Though I believe prohibiting inheritance will only shift the issue, since now you would be able to modify the mainstream image you're inheriting from but not the other images you include. Basically, what would make sense is if you could flag an image as "includable", meaning it does not deliver any operating-system stuff that might break the existing base image. This flag would have to be set by the docker build process and would prevent non-adequately-flagged images from being included. And let's face it: if you're playing with Dockerfiles you're probably not a person seeing their machine for the first day, so while it makes sense to prevent the end user of Docker from doing stupid things, there should be a little more freedom for the people who actually create those images. And seriously, being able to select a base image and include all the stuff I want into it to provision my app would be pretty damn awesome.

parliament718 commented 8 years ago

+1 for INCLUDE. I simply need the nginx and ssh images combined into one. Why does this have to be so hard?

rjurney commented 8 years ago

The idea that this isn't needed is frankly confusing, to the point of being disingenuous. Most users will use this if it is created. "Add ssh to ubuntu" and "add nginx to ubuntu" are pretty common tasks that everyone need not repeat. What Docker HQ really seems to be saying here is: "Obviously needed, but we think it will get too ugly. So we pretend." It would be better if you could just be honest and open about this. Sorry if I'm cranky.


vdemeester commented 8 years ago

@rjurney let's wait for the build spin-out; that way, there will be more than one way to build images (and thus a custom builder could appear that does this). One of the reasons Docker maintainers (working or not working for Docker) are wary about it is that it would add complexity where we want to add flexibility and simplicity. By extracting the builder, we'll have better separation of concerns (between building images and running them), and lots of use cases will be more freely implemented in custom builders.

rjurney commented 8 years ago

Here again, are you pushing this out of the project? Custom sounds... not the default, included way. When in fact, includes are a simple need that almost everyone has. Repeating yourself is complexity. Inheritance-only is complexity. Includes meet a need everyone has in the simplest way possible.


mcraveiro commented 8 years ago

+1, combining images would be extremely useful. Imagine a (god forbid) C++ use case. I build an image with Boost, another with, say, Qt, all with the same compiler, etc. Now say I want to build an app with both Boost and Qt: I just combine the two and presto, a dev environment ready. This would be incredibly useful.

jakirkham commented 8 years ago

Personally, I feel this is too important of an issue not to tackle. That being said we need to get a good understanding of what the problems and scope are regardless of where it is implemented.

So, I see these problems presented by merging.

  1. Handling merge conflicts.
  2. Resolving different bases (Ubuntu and CentOS).

With the first one, I think the simple answer is: don't. To me it sounds too complicated and potentially problematic; it would require a suite of tools to solve and still might be too magical. So, if this were added, merge conflicts should simply fail the build. I suppose it could be revisited later, but that seems like more trouble than it is worth.

As for the second case, it seems like you could add a constraint that they share some base layers. Now the question becomes how many is enough. I think the correct answer when starting out would be that the two images being merged must have the same FROM image. There might need to be more constraints here, but it isn't clear to me that those cases wouldn't fall under problem 1, which we have resolved by simply disallowing it.

Are there some other problems I am missing here?

anentropic commented 8 years ago

I think there should be no attempt to merge... I can't see that happening

A more realistic approach might be a templating type of solution, i.e. allow a Dockerfile fragment (which has no FROM clause, just a list of commands) to be INCLUDEd into a real Dockerfile... the fragments can be shared, reused, and included against any compatible base-image Dockerfile
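
A minimal sketch of that idea (INCLUDE is hypothetical syntax, not valid in any Dockerfile today, and the fragment name is made up):

## fragment: postgres-client.inc -- no FROM, just commands
RUN apt-get update && apt-get install -y postgresql-client

## real Dockerfile
FROM ubuntu:14.04
INCLUDE postgres-client.inc
RUN pip install psycopg2

The same fragment could then be included against any compatible apt-based base image.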

luminapps-zz commented 8 years ago

http://docktitude.io

juanmirocks commented 8 years ago

I am completely new to Docker and learning humbly. But I thought the main point of Docker was to build very small reusable applications and later combine them in whatever way into great big final applications, as in a web app. If that is so, IMHO a statement like INCLUDE is mandatory.

thaJeztah commented 8 years ago

@jmcejuela in many cases "reuse" is creating images dedicated to a specific service, and combining those images/containers to form your application. The individual components of your application are reusable (possibly, only the configuration of the container differs), but the way you combine them forms the actual application.

juanmirocks commented 8 years ago

@thaJeztah I understand, thank you.

But to make it concrete like people posted before: say I build a web app that runs a Scala application (image A), then make the web server with nginx (image B), then have ssh (image C), and need an extra Python application (image D). Say I've created 4 Dockerfiles, one for each. How do I combine them with Docker to create my final web app (image E?)

I just need a simple way to do this. I don't care about philosophical disputes on multiple inheritance, include or not, compose or not, etc. Though certainly I wouldn't like to copy & paste as was proposed before.

Thank you so much for your time. I am still learning Docker.

thaJeztah commented 8 years ago

@jmcejuela you wouldn't combine the images, you would run them as separate containers, and have them cooperate to form the application. You can do so using Docker Compose, which allows you to define your "stack". For example, see https://github.com/docker/example-voting-app/blob/master/docker-compose.yml (and the README; https://github.com/docker/example-voting-app/blob/master/README.md)

For the "ssh" part, it really depends what you want to use it for; overall, containers are considered "immutable", so you won't ssh into a container and modify it, but spin up a new container to replace the old one; data that needs to persist beyond a container's lifecycle is then stored in a volume, so that the new container can use those files.

RomanSaveljev commented 8 years ago

@jmcejuela The Docker builder accepts Dockerfile contents on STDIN, so one could "relatively" easily generate one. If a context has to be passed along, then everything should be tarred and fed into docker build. In my experience this is the simplest possible way to get a composition.

I am developing (and playing with) an approach which builds on the above concept. A Node.js application prepares a TAR file in memory (with the Dockerfile and added files) and dumps it to STDOUT, and the STDOUT gets piped into docker build. Composable parts are versioned, tested and released as NPM modules. I put up a very short example which demonstrates a testing image for crond: http://pastebin.com/UqJYvxUR
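
The underlying mechanics are just the two stdin forms of docker build; a minimal sketch (fragment and tag names are made up):

## concatenate fragments into a Dockerfile read from STDIN (no build context)
cat base.docker app.docker | docker build -t composed-app -
## or tar a generated Dockerfile together with its build context and pipe that in
tar -cf - Dockerfile app/ | docker build -t composed-app -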

juanmirocks commented 8 years ago

Thanks @thaJeztah. In the end I just need a single file that my co-developers can run to get the whole dev stack, and then be able to run it in prod too if needed. I will look more deeply into Docker Compose.

jakirkham commented 8 years ago

Also, INCLUDE was proposed a long time ago ( https://github.com/docker/docker/issues/735 ).

rjurney commented 8 years ago

@jmcejuela The fact is that most Docker users install and use ssh to set up containers and fix issues in a running container. This is how Docker is actually used.

anentropic commented 8 years ago

Only if you're doing it wrong; the docker exec command has been around for quite a while now, and I've never needed ssh since...
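
For example, with a hypothetical container name:

## get a shell in a running container, no sshd required
docker exec -it mycontainer /bin/bash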

rjurney commented 8 years ago

@anentropic That only holds if you are deploying simple services without dependencies. If you have a complex chain of dependencies for any service, anything involving machine learning for instance, you will be duplicating code to deploy services. And there is no good reason you should be doing that. Just because Docker is a domain-specific language doesn't mean the bulk of knowledge about programming languages is thrown out the door and none of the old lessons apply. Sanity still needs to matter. Copying and pasting recipes is insanity.

It also only holds if you subscribe to the 'single service' worldview, which is not all docker users.

RomanSaveljev commented 8 years ago

@anentropic According to the Docker roadmap, provisioning running containers through docker exec may be(come) equally wrong.

P.S. The rkt engine has hit v1.0.

jakirkham commented 8 years ago

@rjurney, :100:

jakirkham commented 8 years ago

Multiple inheritance, whether loved or hated, is a complex feature and will undoubtedly meet resistance. INCLUDE turns Dockerfiles from a build recipe into a language, with path problems that are challenging to resolve.

What if we look at the problem differently: what if we were able to "ADD/COPY" select files from another Docker image into the one being built? This way one can benefit from reusing functionality and avoid code duplication. As we are not using FROM multiple times in an image, but just copying binaries over in an explicit manner, this should behave in a well-defined manner, and when it doesn't, it is a failure. Given that this works with Docker images and is able to leverage registries as the solution, as opposed to some new search path, I would hope this is a reasonable proposal. An added bonus is that we don't have to rerun the same code multiple times either. Also, hopefully, a massive change to the builder could be avoided. Thoughts?

Maybe this is proposed elsewhere, in which case a link would be nice.
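
For what it's worth, this is essentially the shape of what later shipped in Docker as multi-stage builds and COPY --from; a minimal sketch of that syntax (the image and paths are illustrative):

FROM ubuntu:16.04
## copy one binary out of another image instead of inheriting its whole filesystem
COPY --from=nginx:latest /usr/sbin/nginx /usr/local/bin/nginx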

alonbl commented 8 years ago

Hello. Whatever solution is selected, I was very surprised that preparing an image from multiple independent sources is impossible. I would have liked to skip image preparation, since at run time we can perform this process: a set of images would be deployed together, with no need to remake an image every time a dependency is modified. I searched for alternatives and have not yet found any valid one; this is a major usage gap. It looks quite easy to perform using ACI. Thanks!

dylanclement commented 8 years ago

:+1: would love a solution to this and glad it is at least being talked about. Even if it requires base images to be the same.

jakirkham commented 8 years ago

Turns out copying from other images is proposed elsewhere. This is the issue ( https://github.com/docker/docker/issues/18596 ).

sleaze commented 8 years ago

thanks @jakirkham ..

+1 for docker multiple inheritance functionality

sleaze commented 8 years ago

EDIT:

Also see: https://github.com/docker/docker/issues/13026

rjurney commented 8 years ago

I think the problem you're running into is the inability to compose recipes, which doesn't make sense. Docker Compose is great for using multiple containers in an application. Docker Swarm is great for doing the same with multiple nodes. But there is no way to include the work of others at the source-code level in many cases. You must inherit once or recreate it, which is limiting.

On Fri, Mar 18, 2016 at 9:01 AM, Alvin Chevolleaux wrote:

The reply by @thaJeztah https://github.com/thaJeztah is very enlightening. I'm new to Docker and don't understand why you can't combine multiple images together but Docker Compose seems to be the solution to combining multiple containers into one application that I was looking for.

I think the problem for me is that I thought I understood Docker at first but am now finding out that I don't. I'm going to go back and do some more reading!


alvinchevolleaux commented 8 years ago

@rjurney Yes, after looking into Docker Compose a bit more you're correct, that is exactly my confusion. For example, there is a PHP image and a CentOS image, but no inheritance between different images, so it's sort of all or nothing. The official PHP image uses debian:jessie, but I want my setup to be CentOS-based, so it seems that if I want to use a particular image I must accept the rest of the setup, or copy and paste the source Dockerfile and roll my own image from scratch; there doesn't seem to be a middle ground where I can mix and match images.

EDIT: Just to clarify, I understand why you can't mix Ubuntu- and CentOS-based images together, but I don't see why you couldn't have some sort of hierarchical structure where, instead of downloading an entire image, you'd just download the changes from one image to the other.

tejasmanohar commented 8 years ago

INCLUDE would be insanely useful for me as well. Without it, I'm left to copy-paste.

FranklinYu commented 8 years ago

@RomanSaveljev I don't get it:

According to Docker roadmap provisioning running containers through docker exec may be(come) equally wrong.

It does not say that docker exec will get deprecated. docker exec has always been a debugging tool, and so should SSH in a Docker container.

rjurney commented 8 years ago

I feel foolish for participating in this, but what the hell... I'll suggest this again:

Why don't we simplify the issue and start by implementing INCLUDE so that it does not allow inheritance? In other words, you can only include files that have no FROM.

That would handle many use cases, and the onus would be on the files people INCLUDE to work on any reasonable operating system. uname exists for a reason. This would be a first step, and feedback on this implementation would help define anything further.

That seems like an easy decision to make. It would not be a ton of work. It would not be complex. Right?