codeship / codeship-tool-examples

Examples to get you started using Codeship Pro. Download the local CLI to follow along with these examples locally.
http://bit.ly/codeship-jet-tool
MIT License

Updating the deployment example to use volumes, not configure #10

Closed · bfosberry closed this 9 years ago

ngauthier commented 9 years ago

Yay

jschley commented 9 years ago

:+1:

flomotlik commented 9 years ago

Running jet steps in the 8.deployment-container folder fails on my local machine.

A general question: I'm not sure how this specific example shows creating a deployable container, since the container doesn't actually contain the build artifact. It's stored in a volume, and when I push the container to a registry, would the volume be pushed with it?

From my understanding the volume would only be available on the build machine, so once we push the container somewhere else it's basically empty (and a new volume gets created there).

To make sure we have some kind of general test infrastructure for this repo, I set up the services and steps files and added it to our org on Codeship. Build running here: https://codeship.com/projects/103513/builds/f090ac3c-3f2f-498c-9b61-0c46b56715d6

@bfosberry I can send you the log output as well

flomotlik commented 9 years ago

Ran with the latest version again and now it works fine on my machine. But there is still the question of whether this actually creates a container with the artifact.

ngauthier commented 9 years ago

The volume won't be stored with the container. The idea is that you:

1) Build the artifact in a container with the volume mounted as /artifacts.

2) Build the production container with the volume mounted as /artifacts. During that build you:

RUN cp /artifacts/app /app

So /app is not a volume, and thus it is part of the container.

It's like using a USB key between computers.
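
A minimal sketch of that handoff with plain docker (image and file names are invented). Since a volume can't be mounted during a docker build, the sketch passes the artifact through a host-mounted directory and copies it in via the build context rather than the RUN cp above:

```sh
# 1. build the artifact inside a dedicated build container,
#    writing the result into a directory mounted from the host
docker run --rm -v "$PWD/artifacts:/artifacts" my-build-image \
    sh -c 'cp /build/output/app /artifacts/app'

# 2. the production image copies the artifact in at build time,
#    so /app is baked into the image rather than living in a volume
#
#    Dockerfile.production:
#      FROM busybox
#      COPY artifacts/app /app
docker build -f Dockerfile.production -t my-production-image .
```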

flomotlik commented 9 years ago

@ngauthier yup, that's what I thought too, but the Dockerfile is empty, so the RUN cp /artifacts/app /app step is missing.

ngauthier commented 9 years ago

Cool. We should probably add it. Maybe the example should also run something with the production container, to show the artifact is inside it without the volume.


flomotlik commented 9 years ago

OK, so just to reiterate what should be happening here according to my understanding (but isn't yet):

We start an instance of compiledemo, which has the tmp folder of the repository mounted as a volume. When we write to that tmp folder, it's written into the source repository folder on the host, so it's available for the next Docker deploy build (not as a volume, but simply because it's in the folder).

Then in the deploy build we copy the date file out of tmp/date into the container with COPY tmp/date /app/date, which puts the date file (our artifact) into the deploy container. Currently this is not happening, but I assume it should; how else would it get into the container so that we can push the container somewhere else?

The deploy container then has the artifact and is ready to be pushed to a registry with all the files it needs to run in production (which here is only the date file).

Therefore the deploy container doesn't need to share the same data volume or use volumes_from, as it's not reading anything from a volume; it already copied the date file in as part of the Docker build. So volumes_from should be removed from codeship-services.yml, and the cat call in steps.yml shouldn't reference the file in the volume but the path we copied the file into.
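
A minimal sketch of that setup in jet config terms (service and file names are illustrative, and the exact keys may differ from the config format at the time):

```yaml
# codeship-services.yml (sketch)
compiledemo:
  build:
    image: compiledemo
    dockerfile_path: Dockerfile
  volumes:
    - ./tmp:/artifacts                    # writes land in the checkout's tmp/ on the host
deploy:
  build:
    image: deploydemo
    dockerfile_path: Dockerfile.deploy    # Dockerfile.deploy does: COPY tmp/date /app/date
```

```yaml
# codeship-steps.yml (sketch)
- service: compiledemo
  command: sh -c "date > /artifacts/date"   # produce the artifact
- service: deploy
  command: cat /app/date                    # read it from inside the image, not from a volume
```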

According to docker/docker#14080 it's not possible to use volumes during a Docker build, and from my local trial the volume is also not, as described above, linked to the host source repository, but is a named volume on the host. Thus the file is simply not there during the Docker build, and the Docker build fails.

@bfosberry could you describe the workflow you have in mind for this example without configure, so we can make sure we're thinking about the same thing? Because it seems to me that, at the moment, this is not equivalent to what configure does.

bfosberry commented 9 years ago

So this was intended to be an example of providing an artifact to a running container. I'll extend this to add an example of building said artifact into a container to allow containers to be used as build artifacts.

There is an open question around how this should be handled. If we allow users to mount directories on the host, we risk subsequent builds affecting each other locally (and maybe on the hosted platform eventually). A solution would be a Capistrano-style subdirectory layout (tmp/builds/BUILD_ID, tmp/builds/current), which would mean we'd have to enforce that mounted volumes are under a certain dir, and Docker builds would be able to consistently pull from tmp/builds/current. The problem with this approach is that it becomes difficult to maintain consistent artifact interaction for the user between Docker builds and volume mounting: saving a file to a volume may involve just writing A.txt, while adding it to a Docker build may mean COPY tmp/builds/current/A.txt.

Rather than that, I think it's reasonable to expect the user to manage their artifacts. All we should do is ensure that the mounted host volume is under the checkout folder, and document that the user should be aware of possible old build artifacts and clean the folder before use as needed.
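
For illustration, the asymmetry described above would look roughly like this (hypothetical paths):

```sh
# from a step container, the volume hides the host layout:
echo "build output" > /artifacts/A.txt      # actually tmp/builds/current/A.txt on the host

# but a Dockerfile consuming the same file has to spell out the host-side path:
#   COPY tmp/builds/current/A.txt /A.txt
```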

ngauthier commented 9 years ago

Keep in mind that in the future we plan to run builds on a swarm, so there may be more than one host. What about adding a commit option to a step, to allow a user to run a command that copies something from a volume into the container, which we then commit to a new image? Or maybe some other way to avoid host linking?


bfosberry commented 9 years ago

With that in mind, I'm leaning towards a Flocker-powered static build artifact volume linked to jetter, which we can attach to build containers as needed and which is available during the Docker build, since it's attached to jetter.

This would look something like this:

  • A new build starts, and a new volume is created, mounted to the host (and later via Flocker for swarm support)
  • Jetter is started attached to the volume
  • Any time a step runs with a host volume specified, we mount the host into the same folder jetter is using. Later we mount using Flocker
  • Because these objects are available in jetter, they can be used to build images
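
With the Docker CLI of the time, a data-volume container could stand in for that per-build volume; a minimal sketch with invented names:

```sh
# per-build volume, created once at build start
docker create --name build-1234-artifacts -v /artifacts busybox true

# jetter is attached to it for the whole build
docker run -d --name jetter --volumes-from build-1234-artifacts jetter-image

# every step that asks for a host volume gets the same mount
docker run --rm --volumes-from build-1234-artifacts compiledemo \
    sh -c 'date > /artifacts/date'

# since jetter also sees /artifacts, its contents can be fed into image builds
```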

ngauthier commented 9 years ago

Cool. Now for local execution, would we just mount straight to the host?

bfosberry commented 9 years ago

So for now we can simply mount to the folder within jetter; in the future we'll want to split it out to support Flocker and Swarm, but that depends on the Flocker implementation.

flomotlik commented 9 years ago

@bfosberry what are the next steps that need to happen to get this set up? What needs to be changed in jet to support this directly?

Running with the host source repository connected, and Flocker in the future (if that can be properly connected to a build container), sounds good to me.

bfosberry commented 9 years ago

1) Update this PR to include an example of building an image with artifacts
2) Enforce/namespace host volume mounts to within the checkout directory (adding a story, low priority)

flomotlik commented 9 years ago

1) Update this PR to include an example of building an image with artifacts

Is that possible at the moment without any changes to jet? Can we use the source repository on the host as a volume right now?


bfosberry commented 9 years ago

Currently in jet you can mount host volumes, but that gives the user access to the entire host, so it's a security risk we'll want to patch in the future. The problem is that the user doesn't know what dir we'll check out into, so it's hard for them to specify the safe directory. For now we can let them use whatever, but in the future we'll want to prepend the project dir by default.

I also had a great convo with Nick about possibly allowing binary injection through commit, rather than through a Dockerfile. We'll probably be exploring this in the future as an alternative to host volumes.
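
A minimal sketch of how such a commit-based injection could look with plain docker (names are invented; copying files into a container needs Docker 1.8+):

```sh
docker create --name stage my-runtime-base              # container from the runtime base image
docker cp tmp/date stage:/date                          # inject the artifact without a Dockerfile
docker commit stage my-registry/deploydemo:build-1234   # snapshot it as a pushable image
docker rm stage
```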

ngauthier commented 9 years ago

Also, my other idea, which I like best right now, is to use add_docker, run your compilation, and then run a docker build manually. IMO it's an advanced enough use case that we can support it with docs and Docker-in-Docker; it doesn't have to be a configurable situation.

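
A minimal sketch of that add_docker approach (service, script, and image names are invented, and the keys assume the documented Codeship Pro format):

```yaml
# codeship-services.yml (sketch)
builder:
  image: docker:latest    # any image that ships the docker client
  add_docker: true        # exposes the build machine's Docker daemon to this service
  volumes:
    - ./:/workdir
```

```yaml
# codeship-steps.yml (sketch)
- service: builder
  command: sh -c "cd /workdir && ./compile.sh && docker build -t myorg/deploydemo ."
```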

flomotlik commented 9 years ago

Currently in jet you can mount host volumes, but that gives the user access to the entire host, so it's a security risk we'll want to patch in the future. The problem is that the user doesn't know what dir we'll check out into, so it's hard for them to specify the safe directory. For now we can let them use whatever, but in the future we'll want to prepend the project dir by default.

Agreed, prepending the source repository definitely makes sense, so people can't break out of that specific repo. Would this be a massive change to the current implementation?

Also, my other idea, which I like best right now, is to use add_docker, run your compilation, and then run a docker build manually. IMO it's an advanced enough use case that we can support it with docs and Docker-in-Docker; it doesn't have to be a configurable situation.

IMHO it's not really an advanced feature, as this is how we want people to build clean Docker containers. It's definitely a best practice to separate building the artifacts from the actual deployable container (and this has come up numerous times during demos), so it should be built into the platform and be very easy to do. Having to do add_docker, commit, and then push through Docker is too complex for a best practice we want people to follow, in my opinion. @AlexTi thoughts?

bfosberry commented 9 years ago

Build fails because there is no global steps file :P

ngauthier commented 9 years ago

Looks good. I like that volumes must be local. Seems safe.

flomotlik commented 9 years ago

@bfosberry is that working on your local system? It's not working on mine (which might be due to docker-machine not being able to create locally shared folders, even with the VirtualBox machines).