GNS3 / gns3-registry

GNS3 devices registry

Use GitHub package registry to publish Docker images #468

Closed: grossmj closed this issue 1 year ago

grossmj commented 5 years ago

Need to research this: https://help.github.com/en/github/managing-packages-with-github-package-registry/configuring-docker-for-use-with-github-package-registry

ghost commented 2 years ago

I created a demo repository, https://github.com/b-ehlers/gns3-docker-build, that uses GitHub Actions to build Docker images and publish them to the GitHub Docker registry. By changing the arguments of the docker login action in .github/workflows/build.yml it is also possible to push the Docker images to the "normal" DockerHub.

The build system in .github/bin/docker_build rebuilds an image only when the subdirectory containing the Dockerfile has changed or its base image has been updated.

Interested?

grossmj commented 2 years ago

Looks good. Definitely interested :)

grossmj commented 2 years ago

Thanks again for sharing this. I will focus on it once we release GNS3 v3.0 :)

ghost commented 2 years ago

I know GNS3 3.0 is not yet released; I just want to share my current knowledge.

Docker supports any registry that implements https://docs.docker.com/registry/spec/api/. To use a non-DockerHub registry you just need to prefix the image name with its hostname. For example, ghcr.io/b-ehlers/ipterm refers to the ipterm image of user b-ehlers in the GitHub container registry ghcr.io. All Docker applications, including GNS3, support this extended naming scheme.
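
For example (a minimal illustration of the naming scheme only; whether both variants of this particular image are actually published is not guaranteed here):

# Without a registry hostname the name is resolved against DockerHub ...
docker pull b-ehlers/ipterm

# ... while the ghcr.io prefix makes any Docker client pull from the GitHub container registry.
docker pull ghcr.io/b-ehlers/ipterm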

To push an image to a registry, you first have to log in with docker login <registry>. Then issue a series of docker build and docker push commands to build and push your images. When support for multiple architectures is needed, docker buildx build should be used instead. The simplest approach is to put these commands in a script file and execute it when needed; that script can run on a local computer or in the cloud.
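
A minimal version of such a script might look like this (a sketch only; the registry account, the credential variables, and the docker/ipterm directory are placeholders):

#!/bin/sh
set -e

ACCOUNT=ghcr.io/b-ehlers            # or a DockerHub account, e.g. just "b-ehlers"

# Log in once per registry; the credentials are taken from the environment here.
echo "$REGISTRY_TOKEN" | docker login ghcr.io -u "$REGISTRY_USER" --password-stdin

# Build and push a single-architecture image ...
docker build -t "$ACCOUNT/ipterm:latest" docker/ipterm
docker push "$ACCOUNT/ipterm:latest"

# ... or build and push a multi-architecture image in one step.
docker buildx build --platform linux/amd64,linux/arm64 \
    --tag "$ACCOUNT/ipterm:latest" --push docker/ipterm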

DockerHub additionally offers an automated build service that makes it quite easy to build and publish images, but since mid-2021 this service requires a paid subscription. For occasional use it is probably too expensive, so even on DockerHub docker push is the normal way to go.

Both a plain script and the DockerHub build service have the disadvantage that they build all images of a git repository, which can take quite some time, and they typically run on any change in the git repository. To improve on that, I created a Docker build system that builds only the images that have changed. Details can be found at https://github.com/b-ehlers/gns3-docker-build/blob/master/docker_build.md. It works with any Docker registry; I have tested it with DockerHub and the GitHub container registry ghcr.io. It should run on any system that supports Python and was tested locally and with GitHub Actions.
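
The core idea can be sketched in a few lines of shell (this is only an illustration of the change detection, not the actual docker_build implementation; LAST_BUILT_COMMIT, ACCOUNT, and IMAGE are placeholders):

# Rebuild an image only if its subdirectory changed since the last successful build.
if ! git diff --quiet "$LAST_BUILT_COMMIT" HEAD -- "docker/$IMAGE"; then
    docker build -t "$ACCOUNT/$IMAGE" "docker/$IMAGE"
    docker push "$ACCOUNT/$IMAGE"
fi

# Detecting an updated base image additionally requires comparing the base image's
# current digest in the registry (e.g. shown by "docker buildx imagetools inspect")
# with the digest recorded when the image was last built.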

ghost commented 1 year ago

Last week I noticed a major drawback of my solution: as I hadn't found a tool or Python module that returns the Docker image information in the way I need, my tool uses the Docker registry API directly. But this API occasionally gets (mostly small) changes that need attention, and that's the problem: I can't guarantee that I will support this software for years to come.

So my suggestion is to use the second-best solution: create a script file in the docker subdirectory that builds all Docker images and uploads them to whichever registry the GNS3 devs want to use, then use GitHub Actions to run this build script. The drawback is that the build takes quite some time, but that shouldn't be a big deal. The big advantage is that it is very simple to implement and doesn't depend on an additional piece of software that someone has to maintain.
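
Such a brute-force build script could be as small as the following sketch (the docker/ directory layout and the DOCKER_ACCOUNT variable are assumptions):

#!/bin/sh
# Rebuild and push every image, regardless of what changed.
set -e
for dir in docker/*/; do
    [ -f "$dir/Dockerfile" ] || continue
    image="$DOCKER_ACCOUNT/$(basename "$dir"):latest"
    docker build -t "$image" "$dir"
    docker push "$image"
done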

So I'm going to withdraw my docker_build solution.

ghost commented 1 year ago

Found a Python module for accessing a Docker registry: python-dxf. I have modified my code to use that module and the first tests are promising. That solves my issue of having to implement (and support) the registry access myself.
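
Besides the Python API, the python-dxf package also installs a small dxf command line tool, so a quick check of registry access could look roughly like this (a sketch; the command and environment variable names should be verified against the python-dxf documentation, and credentials are only needed for private repositories):

pip install python-dxf

# Point the tool at a registry and list the tags of a repository.
export DXF_HOST=registry-1.docker.io
dxf list-aliases library/alpine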

But as the 3.2 milestone is years away, I am in no hurry. When the time comes for implementation, feel free to contact me.

grossmj commented 1 year ago

> But as the 3.2 milestone is years away, I am in no hurry. When the time comes for implementation, feel free to contact me.

I think we should prioritize this issue and implement something with GitHub Actions to automatically build and push Docker containers when needed. For now we can push to Docker Hub, and later to GitHub Packages as well.

ghost commented 1 year ago

So how can I help? I suggest that I use my old gns3-docker-build code and create a fork of this registry (and optionally a PR). This gives you a build system using GitHub Actions that rebuilds the Docker images that have changed (or whose base image has changed) and uploads the images to a registry. You can then test it and make any changes you like. In a second step this build system could be enhanced to support uploading to multiple registries.

Alternatively, rebuilding all images on any change is even simpler: create a simple build script and run it via GitHub Actions. This needs a longer build time and more CPU time, and it would run on any change in this registry, even when non-Docker appliances are changed. But as long as you are not charged for GitHub Actions, that doesn't matter.

grossmj commented 1 year ago

> So how can I help? I suggest that I use my old gns3-docker-build code and create a fork of this registry (and optionally a PR). This gives you a build system using GitHub Actions that rebuilds the Docker images that have changed (or whose base image has changed) and uploads the images to a registry. You can then test it and make any changes you like. In a second step this build system could be enhanced to support uploading to multiple registries.

Your old gns3-docker-build code is not accessible anymore. Also, you mentioned python-dxf and that your first tests were promising. How close is the implementation to being ready for use?

> Alternatively, rebuilding all images on any change is even simpler: create a simple build script and run it via GitHub Actions. This needs a longer build time and more CPU time, and it would run on any change in this registry, even when non-Docker appliances are changed. But as long as you are not charged for GitHub Actions, that doesn't matter.

That could be a solution. I am not too sure about not being charged for GitHub Actions; I think we get something like 2000 free minutes per month? I would have to check. I just prefer something that builds only if needed because it is more elegant... Alternatively, we could also have a manual GitHub Actions trigger to rebuild and push the images, similar to what we have to refresh and publish the API documentation for v3: https://github.com/GNS3/gns3-server/actions/workflows/publish-api-documentation.yml
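
Assuming the Docker build workflow gets a workflow_dispatch trigger with an images input (as the build-docker-images.yml diff later in this thread suggests), such a manual rebuild could also be started from the command line, for example (the format of the images value is an assumption):

# Rebuild everything ...
gh workflow run build-docker-images.yml

# ... or only selected images.
gh workflow run build-docker-images.yml -f images="ipterm ovs-snmp"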

ghost commented 1 year ago

> Your old gns3-docker-build code is not accessible anymore. Also, you mentioned python-dxf and that your first tests were promising. How close is the implementation to being ready for use?

Correct, I deleted gns3-docker-build because I didn't want to keep unused code among my GitHub repositories for years.

The change to use python-dxf is done, but it needs some more testing.

My main concern is: who will support this? Of course, I will be in charge at first. But what happens when I lose interest, let's say in two years? Are you willing to take over? If yes, fine, let's go that way. If not, it's better to use the simple brute-force solution.

grossmj commented 1 year ago

> My main concern is: who will support this? Of course, I will be in charge at first. But what happens when I lose interest, let's say in two years? Are you willing to take over? If yes, fine, let's go that way. If not, it's better to use the simple brute-force solution.

No worries, we can support this 👍

ghost commented 1 year ago

OK, I will create a fork of gns3-registry including my Docker build system in the next few days.

ghost commented 1 year ago

Well, I was faster than expected.

I created a fork b-ehlers/gns3-registry-docker with the build system added. First I used it to upload to the GitHub Container Registry ghcr.io, then I changed the workflow to upload to DockerHub. So both registries are tested.

Some Docker appliances need a dummy layer to get an updated container timestamp; I have updated those. But there are a lot of Docker appliances that are outdated or can't be built, see the comments in docker/docker_images. So there is a lot to do.

The build system supports multi-platform builds; for testing I enabled it for ovs-snmp, but that results in longer build times. Depending on the popularity of the Apple Silicon platform you can enable it for more appliances, but it requires that the base image also supports the linux/arm64 platform.
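
Whether a base image already provides linux/arm64 can be checked up front, and the multi-platform build itself only needs the extra --platform flag (a sketch; the base image name and the docker/ovs-snmp path are examples):

# Show which platforms a base image provides.
docker buildx imagetools inspect ubuntu:22.04

# Build and push an appliance for both amd64 and arm64.
docker buildx build --platform linux/amd64,linux/arm64 \
    --tag "$DOCKER_ACCOUNT/ovs-snmp:latest" --push docker/ovs-snmp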

If you want to test it in your repository:

For an upload to ghcr.io you have to grant read/write permissions to GITHUB_TOKEN (Registry Settings / Actions / General / Workflow permissions / Read and write permissions). (No longer necessary.)

For an upload to DockerHub you have to create an access token in DockerHub with read/write/delete permissions (Account Settings / Security / New Access Token). Then, in the settings of the registry repository on GitHub (Registry Settings / Secrets and variables / Actions / New repository secret), add the repository secret "DOCKER_REGISTRY" containing the name of the registry. If you want to modify private Docker images, then additionally "DOCKERHUB_USERNAME", containing the DockerHub username, and "DOCKERHUB_TOKEN", containing the DockerHub access token, must be defined.
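
As a side note, the repository secrets can also be created with the GitHub CLI instead of the web UI, for example (the values shown are placeholders):

# Store the DockerHub credentials as repository secrets.
gh secret set DOCKERHUB_USERNAME --body "my-dockerhub-user"
gh secret set DOCKERHUB_TOKEN --body "dckr_pat_..."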

So how should we proceed? Do you want to have a look first, or shall I create a PR and you decide then?

grossmj commented 1 year ago

Awesome, I think you can go ahead and create a PR.

Just one question: do you rebuild the images on a schedule too?

  schedule: 
     - cron: '37 7 * * 3'

Thanks a lot 👍

ghost commented 1 year ago

> Just one question: do you rebuild the images on a schedule too?
>
>   schedule:
>      - cron: '37 7 * * 3'

Yes, there are three ways a rebuild is triggered: by a push that changes relevant files, by the weekly cron schedule, and by a manual workflow run.

grossmj commented 1 year ago

I did a quick review; excellent work on the Docker build system 👍

To enable ghcr.io, do I just need to remove the comments on the following lines?

https://github.com/GNS3/gns3-registry/pull/767/files#diff-2817fc9ec6443e19233164eefa86576a0972ce81bcf0ec4a04f96d6119db038eR39-R40

https://github.com/GNS3/gns3-registry/pull/767/files#diff-2817fc9ec6443e19233164eefa86576a0972ce81bcf0ec4a04f96d6119db038eR51-R52

Thanks again.

ghost commented 1 year ago

I have to agree that this part of the workflow file is confusing.

The DockerHub statements were included twice: once uncommented (and therefore active) and once commented out in the DockerHub section. So I added a new commit that removes these duplicate entries and (hopefully) makes the configuration clearer.

If you want to change the config to upload to the GitHub container registry, you now have to comment out the statements in the DockerHub section and uncomment the GitHub container registry statements.

A diff of that change would look like this:

diff --git a/.github/workflows/build-docker-images.yml b/.github/workflows/build-docker-images.yml
index fd20e7e..945251c 100644
--- a/.github/workflows/build-docker-images.yml
+++ b/.github/workflows/build-docker-images.yml
@@ -1,4 +1,4 @@
-name: Build Docker images and upload to DockerHub
+name: Build Docker images and upload to GitHub Container Registry ghcr.io
 on:
   push:
     branches:
@@ -33,13 +33,13 @@ jobs:
         uses: docker/login-action@v2
         with:
           # GitHub Container Registry:
-          # registry: ghcr.io
-          # username: ${{ github.repository_owner }}
-          # password: ${{ secrets.GITHUB_TOKEN }}
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
           #
           # DockerHub:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
+          # username: ${{ secrets.DOCKERHUB_USERNAME }}
+          # password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Install python requirements
         run: python3 -m pip install --requirement .github/bin/requirements.txt
       - name: Build and push images
@@ -47,11 +47,11 @@ jobs:
           # DOCKER_PASSWORD is optional, only needed for private repositories
           #
           # GitHub Container Registry:
-          # DOCKER_ACCOUNT: ghcr.io/${{ github.repository_owner }}
+          DOCKER_ACCOUNT: ghcr.io/${{ github.repository_owner }}
           # DOCKER_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
           #
           # DockerHub:
-          DOCKER_ACCOUNT: ${{ secrets.DOCKERHUB_USERNAME }}
+          # DOCKER_ACCOUNT: ${{ secrets.DOCKERHUB_USERNAME }}
           # DOCKER_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
           #
           IMAGES: ${{ inputs.images }}

As stated in the comments, the DOCKER_PASSWORD statement is optional; it's only needed for private Docker repositories. It doesn't hurt when it is enabled for public Docker repositories.

For using the GitHub container registry, see https://github.com/GNS3/gns3-registry/issues/468#issuecomment-1565587503:

> For an upload to ghcr.io you have to grant read/write permissions to GITHUB_TOKEN (Registry Settings / Actions / General / Workflow permissions / Read and write permissions).