piotrkubisa opened this issue 6 years ago
Hi @piotrkubisa,
You're correct that the host's `docker.sock` is mounted into the guest container. This allows Docker builds to take advantage of the host's Docker layer cache, preserving intermediate layers between local builds instead of losing them whenever the guest container terminates.

The tradeoff is that directories from the guest container cannot be mounted via `docker run`, because the mount paths are resolved on the host, not in the container, as you've noted.
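To make that path-resolution behavior concrete, here is a minimal sketch (image names and paths are illustrative, not from the CodeBuild agent): when a container shares the host's Docker socket, any `-v` source path in a nested `docker run` is interpreted by the host daemon, not inside the guest.

```bash
# On the host: start a "guest" shell that shares the host's Docker socket.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli sh

# Inside the guest: create a directory and try to mount it into a
# nested container.
mkdir -p /tmp/guest-data && echo hello > /tmp/guest-data/file.txt
docker run --rm -v /tmp/guest-data:/data alpine ls /data
# The listing comes back empty: /tmp/guest-data was resolved by the HOST
# daemon, where that directory does not exist, so Docker created an
# empty one there instead of mounting the guest's copy.
```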
We've filed this as a feature request for the ability to choose between the two behaviors. Thanks for reporting this!
Any update (or workarounds) on this one? I have a similar requirement to the OP.
@micklove I guess you could just change the following line (remove the volume share of `docker.sock` between host and guest) and run your modified shell script:

Edit: My bad, it won't work, because it will report problems connecting to the Docker service. I don't know any workaround to satisfy the `VOLUME [/var/lib/docker]` step... maybe exporting the Docker container (i.e. via `docker image save amazon/aws-codebuild-local > local-cb.tar`) to copy the `/LocalBuild` contents and recreating the image without that step would help (since the image is not so complicated; see `docker history --no-trunc amazon/aws-codebuild-local`)?
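For what it's worth, a rough, untested sketch of that export-and-rebuild idea (the base image name is a placeholder; note that a `VOLUME` declared in a parent image cannot be unset in a child image, so the rebuild has to start from the original base):

```bash
# Copy /LocalBuild out of the published image without running it.
docker create --name cb-tmp amazon/aws-codebuild-local
docker cp cb-tmp:/LocalBuild ./LocalBuild
docker rm cb-tmp

# List the original build steps so they can be replayed minus VOLUME.
docker history --no-trunc amazon/aws-codebuild-local

# Rebuild from the original base image; "ubuntu:14.04" is a placeholder,
# use whatever base `docker history` actually reveals.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
COPY LocalBuild /LocalBuild
# ...replay any remaining steps from `docker history`, omitting VOLUME...
EOF
docker build -t aws-codebuild-local-novolume .
```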
For our builds we need the code path mounted at a minimum. I was able to resolve this by referencing the volume directly in Compose, so we now have:

```yaml
volumes:
  - /var/lib/docker/volumes/agent-resources_user_volume/_data/srcDownload/src:/src
```

This is an OK-ish workaround for us for now, but we would like to see this implemented in a more intuitive/native way, as we now have to parameterize a bunch of low-level things to test builds locally, which kind of defeats the whole point of being able to build locally.
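If the volume name differs on your machine, the host path backing it can be resolved with `docker volume inspect`; the volume name below is the one from the snippet above:

```bash
# Print the host directory backing the local agent's user volume.
docker volume inspect --format '{{ .Mountpoint }}' agent-resources_user_volume
# -> /var/lib/docker/volumes/agent-resources_user_volume/_data
```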
FYI: for anybody still interested in running a mimicked CodeBuild locally, but who wants more room to make changes: quite recently I updated https://github.com/piotrkubisa/localcb to compute a `docker` command (via the `localcb run --dry-run` combo) based on the input `buildspec.yml` file and CLI arguments. I believe that way it is easier to customize, and more transparent about what is actually happening (as long as you don't need to use `finally` in stages).
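A hypothetical invocation, based only on what is described above (check the repository's README for the actual flags and arguments):

```bash
# Print the computed `docker run ...` command instead of executing it,
# so it can be inspected or edited before running for real.
localcb run --dry-run
```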
I get an exit status 2 when I run `docker run -v $(pwd)/target:/zap/wrk/:rw -t owasp/zap2docker-stable zap-full-scan.py -t $url -g gen.conf -r report.html` in CodeBuild. Has there been any workaround yet?
> For our builds we need the code path mounted at a minimum. I was able to resolve this by referencing the volume directly in Compose, so we now have:
>
> ```yaml
> volumes:
>   - /var/lib/docker/volumes/agent-resources_user_volume/_data/srcDownload/src:/src
> ```
>
> This is an OK-ish workaround for us for now, but we would like to see this implemented in a more intuitive/native way, as we now have to parameterize a bunch of low-level things to test builds locally, which kind of defeats the whole point of being able to build locally.
This doesn't seem to work for me. The `agent-resources_user_volume` volume's `src` directory on the host is always empty when I run `codebuild_build.sh`. As a workaround, I've set an extra variable in my local environment file so the build knows when it's running in local CodeBuild, and I have a script that populates the path on the host before running CodeBuild. When running remotely, it will get the path from inside the CodeBuild container.
`codebuild.env.template`:

```
LOCAL_CODEBUILD=true
SOURCE_DIR=${SOURCE_DIR}
```
`run_local_codebuild.sh`:

```bash
#!/bin/bash
# Resolve the source directory on the host, because the local CodeBuild
# agent uses the host's Docker daemon.
sed "s|\${SOURCE_DIR}|/path/to/src/on/host|" codebuild.env.template > codebuild.env

# Run CodeBuild locally.
codebuild_build.sh -i aws/codebuild/standard:5.0 -s /path/to/src/on/host -e codebuild.env
```
`buildspec.yaml`:

```yaml
...
phases:
  build:
    commands:
      - |
        if [ ! "${LOCAL_CODEBUILD}" = true ]
        then
          SOURCE_DIR="/path/to/src/in/container"
        fi
...
```
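Presumably the point of `SOURCE_DIR` is to be used as a mount source in a nested `docker run`, where only a host-resolvable path will work; a hypothetical build command consuming it (the image name and build invocation are illustrative) might look like:

```yaml
phases:
  build:
    commands:
      # SOURCE_DIR must be a path the HOST daemon can resolve when
      # running locally, hence the substitution script above.
      - docker run --rm -v "${SOURCE_DIR}:/src" my-build-image make -C /src
```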
Docker-in-Docker-in-Docker sounds like fun. I am the author of localci, which I unfortunately (but also fortunately) developed a few days before the public release of AWS CodeBuild Local Builds. When I read the announcement blog post I was a bit sad, but I decided to finish it anyway, to learn how to manage Docker containers using Go. Thanks to that, I noticed AWS CodeBuild Local Builds has the same problem I ran into during the development of localci.

I wanted to prepare a tough test for localci: whether my `buildspec.yml` would be parsed correctly, and whether the CodeBuild job would execute just as it would on a production server. It uses the `aws/codebuild/docker:17.09.0` image, and in one phase there is a command that runs another Docker container (Docker-in-Docker-in-Docker) with a shared volume to compile a Go binary (link to example). Frankly, it might not be complicated for non-Gophers, but I wanted a tough test, huh? In AWS CodeBuild Local Builds it reports the following error:

It seemed similar to me, because I had the same problem during the development of localci, a problem with shared volumes: an empty directory in the guest-guest Docker container. It has also shed some light on the issue. In localci I had to remove `docker.sock` as a shared volume between the host and guest Docker (https://github.com/piotrkubisa/localci/pull/2/commits/4d078748683ba01484940ed53378454a447f7c46). Then I noticed everything started working just like CodeBuild in the AWS cloud.

Today I tried to replace the logic in localci with AWS CodeBuild Local Builds, since I was eager to try it after the announcement. Sadly, I need to move on to the next project and wait for fixes. I look forward to updates in the changelog related to this issue.