Let's say we're deploying a Go application. The first container we build contains all of the build tools, has Go installed, etc. After we build our application, we're left with a binary that has very minimal dependencies to run. It would be nice to be able to build a second container that has only the binary we just built without the entire build toolchain. How do we share that information from one build to the next?
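To make the setup concrete, here's a minimal sketch of the two images (base images, paths, and names are hypothetical, just for illustration). The first has the full Go toolchain; the second would ideally contain nothing but the compiled binary:

```dockerfile
# Build image: full Go toolchain, hundreds of MB
FROM golang
ADD . /go/src/app
RUN go install app        # produces /go/bin/app
```

```dockerfile
# Runtime image: just the binary -- but how does ./app get here
# from the build container above?
FROM debian
ADD app /app
CMD ["/app"]
```

The open question is the step between the two: getting `/go/bin/app` out of the first image and into the second build's context.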
We can't really use mounted volumes unless we wrap each build in a `docker run` command (like Docker itself does). Maybe that's a viable option.
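A sketch of what that wrapping might look like (image names, paths, and the `Dockerfile.run` filename are all hypothetical; assumes a Docker daemon is available):

```shell
# 1. Build the toolchain image as usual.
docker build -t myapp-build .

# 2. Run it with a host directory mounted, and copy the binary out.
mkdir -p out
docker run --rm -v "$PWD/out:/out" myapp-build cp /go/bin/app /out/app

# 3. Build the minimal runtime image, whose Dockerfile just ADDs ./out/app.
docker build -t myapp -f Dockerfile.run .
```

This works, but it moves the artifact hand-off outside the build itself, so a plain `docker build` is no longer enough to reproduce the final image.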
Another option would be to use something like S3 - the binary could be synchronously pushed to S3 from the first container and then pulled down in the second container.
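The S3 hand-off could look roughly like this (bucket name is hypothetical; assumes the AWS CLI and credentials are available inside both containers):

```shell
# In the build container, as the last step after compiling:
aws s3 cp /go/bin/app s3://my-build-artifacts/app

# In the runtime container's build, pull the binary back down:
aws s3 cp s3://my-build-artifacts/app /app
chmod +x /app
```

The obvious downsides are the external dependency and the need to bake credentials (or some token) into both build environments.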
Following this same process, maybe the leader could provide some kind of similar data store that listens on localhost. This seems a bit dangerous, though, and somewhat counter to the intended use of Docker.
Thoughts?