Closed by rquadling 8 months ago
Hey @rquadling, it sounds like you figured out what to do, and for the right reasons: when running something within a docker image, you need to provide a way for it to access the host's docker instance, which is what you are doing by exposing the `docker.sock`. The error you mentioned contains this bit, which is of importance:

```
failed to get image descriptor from registry: GET https://index.docker.io/v2/local/hello_world/manifests/container: UNAUTHORIZED: authentication required
```

The error you see happens because, when syft runs as a container with no reference to the `docker.sock` (or any other way of indicating how to reach the docker daemon), it can't use the docker daemon and falls back to a direct registry connection (note the URL). But of course, that image isn't something you have access to in the main Docker registry, and it isn't what you want anyway, because what you want to scan is a locally built image that's only available on the host.
That's all to say: you did the right thing. 🙌
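For reference, the socket-mount approach described above looks roughly like this. It's a sketch, not a verbatim command from this thread — the image name `local/hello_world:container` is taken from the error message, and the script skips itself when docker isn't available:

```shell
if command -v docker >/dev/null 2>&1; then
  # Mount the host's docker socket so the syft container can talk to the
  # host daemon instead of falling back to a direct registry lookup.
  docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    anchore/syft:latest local/hello_world:container
else
  echo "docker not found; skipping the scan"
fi
```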
I have tried moving the locally working setup to BitBucket (so we can ensure the SBOM is valid)... and I've got the same issue.

But this time I think this is a protection by Atlassian to stop inappropriate access to the Docker daemon. There's a very detailed blog post about this: https://staaldraad.github.io/post/2019-07-11-bypass-docker-plugin-with-containerd

With that, it looks like using BitBucket Pipelines to run the official Syft container and have it scan an image just built in the pipeline is not possible. Installing Syft in the pipeline is the obvious solution, and it's what I started with locally.

Another option is to build Syft into a "toolbox" image for the pipeline, but that's not something we should be doing (I've been keeping each container to a single thread/service/feature).

If there's nothing else to try (and I've looked, and the blog is pretty clear that mounting the docker socket is a no-no), then I'm done with this issue.
Thanks for the help.
@rquadling it's possible you could configure Docker to use remote access instead of the `docker.sock`: https://docs.docker.com/config/daemon/remote-access/. Beyond that, you could run a local registry (maybe just with the `registry:2` image) and configure syft to connect directly to that registry. But unfortunately, I don't have a lot of specifics about BitBucket or your particular setup to be able to suggest any other solutions.
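The local-registry route could look roughly like this. Everything here is illustrative rather than taken from the thread — the port, the image name, and the use of `SYFT_REGISTRY_INSECURE_USE_HTTP` for the plain-HTTP local registry are all assumptions; the script skips itself when docker isn't available:

```shell
if command -v docker >/dev/null 2>&1; then
  # Start a throwaway registry on localhost:5000.
  docker run -d --name temp-registry -p 5000:5000 registry:2

  # Tag and push the locally built image into it (name is illustrative).
  docker tag local/hello_world:container localhost:5000/hello_world:container
  docker push localhost:5000/hello_world:container

  # Point syft straight at the registry, bypassing the docker daemon;
  # allow plain HTTP since the throwaway registry has no TLS.
  docker run --rm --network host \
    -e SYFT_REGISTRY_INSECURE_USE_HTTP=true \
    anchore/syft:latest registry:localhost:5000/hello_world:container

  docker rm -f temp-registry
else
  echo "docker not found; skipping"
fi
```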
@rquadling Another option might be saving the container off as a tar, and making that available to the syft image:

```shell
docker save alpine:latest > foo.tar
docker run --rm -v "$PWD:/data" anchore/syft:latest docker-archive:/data/foo.tar
```
This works for me, and no docker socket manipulation is needed; you just mount the path to the tar as a volume into the syft container that gets run.

I think this has tradeoffs with regard to surfacing layer information and using disk space on the worker, but if you just want syft to scan all the files present in the final built image, I think it will work.
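Batched into a single pipeline-style script, the tar approach might look like the following sketch. The image name, output file, and the `-o table` output format are assumptions, not from the thread, and the script skips itself when docker isn't available:

```shell
if command -v docker >/dev/null 2>&1; then
  # Build, export, and scan without ever exposing the docker socket
  # to the syft container.
  docker build -t local/hello_world:container .
  docker save local/hello_world:container > image.tar
  docker run --rm -v "$PWD:/data" anchore/syft:latest \
    docker-archive:/data/image.tar -o table > sbom.txt
else
  echo "docker not found; skipping"
fi
```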
👋 @rquadling I think I found the issue that was causing you some pain regarding how the sockets are set up. I filed a PR with stereoscope, so the tools should now just work out of the box, finding and connecting to the correct socket when the system is `darwin` and the old default socket path is not available:
I'm in "learning mode" and very new at this.
The following command works just fine for what I need:
The `sed` is in there to trim the trailing whitespace before pre-commit rejects the commit. This outputs
I end up with a file containing all the dependencies in a container I've just built. That file will be put into source control. This is all batched up in a simple script and is providing what we need.
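The exact command isn't quoted in the thread, but the `sed` stage it describes would be something like this sketch, where `printf` just simulates scanner output containing trailing whitespace:

```shell
# Strip trailing whitespace from each line so pre-commit's
# trailing-whitespace hook accepts the generated file.
printf 'NAME    VERSION  \nalpine  3.19.1\n' \
  | sed 's/[[:space:]]*$//' > sbom.txt
cat sbom.txt
```

In the real script, the `printf` would be replaced by the syft invocation that produces the dependency listing.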
BUT ... it requires the dev to install yet another tool (and that can be a lot of tools by the time you've got everything working ... and then keeping everyone up to date is more work). So, the transition is to using the official container (as documented).
So as the image is built locally, I started with what seems an obvious replacement
But that outputs
I'm sort of lost as to how to fix this. I think this is a "correct" error. Docker-in-docker related? So somehow I need to run the `docker run` command with the local image being accessible in some way.

This is still me learning. Part of this is to NOT just throw the image into AWS ECR and be done with it. By having the SBOM created as part of the dev workflow/pipeline, we can verify/accept anything changing within the newly built container. And we'll be pulling all the SBOMs together, so across all our images, if there's a CVE we need to deal with, we'll know which ones are affected.
Some of this MAY already exist. But learning first and using second is where we are.
I did SOME research and added `-v /var/run/docker.sock:/var/run/docker.sock`.

This worked! But is it right/safe? Anyone still here after NOT running away scared by the newbie mistake?