
Docker Container Jobs in GitHub Actions | josh-ops #26

utterances-bot opened 11 months ago

utterances-bot commented 11 months ago

Docker Container Jobs in GitHub Actions | josh-ops

Getting started using a Docker Container to run your GitHub Actions Job, tips and tricks, troubleshooting, and caveats

https://josh-ops.com/posts/github-container-jobs/

mirabilos commented 11 months ago

But this does not work because it tries to run the checkout in the container instead of normally, which fails because the container doesn’t have nodejs or something.

(Attempting to use a Debian slim container.)
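Roughly this shape, for reference (the image name is illustrative):

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: debian:bookworm-slim   # illustrative; no node in the image
    steps:
      # checkout is a node action; with container: set, it runs inside
      # the image, which fails when the container can't run node
      - uses: actions/checkout@v4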

mirabilos commented 10 months ago

I found a way to do it instead. But it requires putting all the actual commands into one script file (because GitHub fucks up the quoting; you really can't do them inline) and using a container for just one step of the job.
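Schematically something like this (the script name and image are illustrative, not my exact setup):

steps:
  # checkout runs on the host runner as usual, so node is available
  - uses: actions/checkout@v4
  - name: Build in the container
    run: |
      # all the real commands live in build.sh; inline commands get mangled
      docker run --rm -v "$PWD:/work" -w /work debian:bookworm-slim ./build.sh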

joshjohanning commented 10 months ago

But this does not work because it tries to run the checkout in the container instead of normally, which fails because the container doesn’t have nodejs or something.

Ahh yeah, that would be a problem if your container doesn't have node, because it's going to run the entire job inside of the container (checkout action and all). I'm not sure of any way today to have that part run outside of the container.

Have you ever explored Service Containers? It might not be much different than what you are doing today, but it might be a more "GitHub-native" way to do it. Your step could then exec into that container, perhaps map in the current workspace, and then run your script.
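For reference, a service container is declared at the job level, roughly like this (redis is just a stand-in example here):

jobs:
  build:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis
        ports:
          - 6379:6379   # reachable from steps at localhost:6379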

But either way nice workaround, thank you for sharing @mirabilos! 🙇

mirabilos commented 10 months ago

Josh Johanning wrote:

But this does not work because it tries to run the checkout in the container instead of normally, which fails because the container doesn’t have nodejs or something.

Ahh yeah, that would be a problem if your container doesn't have node, because it's going to run the entire job inside of the container (checkout action and all).

Yeah, that’s scary.

Have you ever explored Service Containers?

No… I really want to run the main build in the environment, no separate services. But…

But either way nice workaround,

… I found a way that works, even if it means condensing all steps that use the i386 environment into one shell script and running that this way. It was no problem for me.

thank you for sharing @mirabilos! 🙇

You’re welcome! And I wouldn’t have gotten this far without your article either, so thank you there.

bye, //mirabilos -- "Cool, /usr/share/doc/mksh/examples/uhr.gz really is a reason to install mksh on every system." -- XTaran at OpenRheinRuhr, quite enthusiastic

simesy commented 4 months ago

I have found this works well, and my container is automatically networked to the additional service containers. However, I need to run one command inside one of the service containers...

Your step could then exec into that container, perhaps map in the current workspace, and then run your script.

So say I'm using a node container as my job container. I assume I have a simple problem: it doesn't have the docker binary. And if I chose a docker image as my job container, it would not have node.

simesy commented 3 months ago

I'm no expert, but I'm going to have a stab at this... corrections welcome.

The strategy in this blog is cool, since all steps will be automatically networked on the default network bridge. But I think the reason this isn't used a lot is that the image you use for your job container would need to have the docker executable in it if you want to run anything like docker exec on the other service containers.

Whereas if you leave everything as the default, you have a lot of tools at your disposal and can run docker commands. But then you need to add your own custom network, rather than the default bridge, if you want to connect to services running in any of the service containers.
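Something like this, I think (names are illustrative, assuming a plain ubuntu-latest runner with no job container):

steps:
  - name: Run services on a custom network
    run: |
      # user-defined bridge so containers can resolve each other by name
      docker network create mynet
      docker run -d --name myredis --network mynet redis
      sleep 5   # crude wait for startup
      # another container on the same network reaches it via the name
      docker run --rm --network mynet redis redis-cli -h myredis ping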

joshjohanning commented 3 months ago

But I think the reason this isn't used a lot is that the image you use for your job container would need to have the docker executable in it if you want to run anything like docker exec on the other service containers.

Correct @simesy! Also, other pain points I have seen are the startup command (it has to be built into the Docker image and can't be passed in) and credentials. For example, it is challenging to use an image from a non-GitHub private registry with job or service containers, since there is no way to provide the authentication to the job before it runs.

Whereas if you leave everything as the default, you have a lot of tools at your disposal and can run docker commands. But then you need to add your own custom network, rather than the default bridge, if you want to connect to services running in any of the service containers.

You hit the nail on the head with the network! We have built out something that sort of mimics the network that is automatically created for you:


container:
  image: node:21-bookworm   # Your image here
  volumes:
    # Mount the path the runner is using to share the socket into the job
    - /run/docker/docker.sock:/run/docker/docker.sock
env:
  # And tell the steps with Docker CLI usage how to reach the socket
  DOCKER_HOST: unix:///run/docker/docker.sock

run: |
  # Find the network that starts with the prefix and return its name
  NETWORK=$(docker network ls --filter name=github_network_ --format '{{.Name}}')

  # Spin up a container in the background and connect it to the job container's network
  # Give the service a name so we can find it ... and so the container can later be killed
  docker run -d --publish 6379:6379 --name myredis --network $NETWORK redis

  # Give it time to spin up. Docker compose has built-in wait support ... and can coordinate a network and multiple containers :-)
  sleep 10

  # Send a PING to the DNS name "myredis" ... the container.
  echo ping | nc -q 1 myredis 6379

This will put all of the docker containers you run manually into the same network, just as if they were all job/service containers.
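And since the container was started with a name, a later step can tear it down when the job is done (the shape of this step is illustrative):

- name: Clean up
  if: always()   # remove the container even if earlier steps failed
  run: docker rm -f myredis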