GNS3 / gns3-registry

GNS3 devices registry

Docker Build System #767

Closed b-ehlers closed 1 year ago

b-ehlers commented 1 year ago

Docker build system as described in https://github.com/GNS3/gns3-registry/issues/468 and .github/docker_build.md.

For pushing to DockerHub the repository secrets "DOCKERHUB_USERNAME" and "DOCKERHUB_TOKEN" must be set, see https://github.com/GNS3/gns3-registry/issues/468#issuecomment-1565587503.
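
As a rough illustration (not the actual workflow step), the push side would consume these secrets roughly like this; the image name is hypothetical:

# hedged sketch: log in with the repository secrets, then push (image name is hypothetical)
$ echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
$ docker push "$DOCKERHUB_USERNAME/some-appliance:latest"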

Currently quite a few docker images can't be built, see the comments in docker/docker_images. These images need some care.

b-ehlers commented 1 year ago

In commit 3386ba0 the repository secrets DOCKERHUB_USERNAME and DOCKERHUB_TOKEN were undefined.

Commit b954ae5 shows a bug: Docker wants lowercase repository names, but the docker build system uses the GitHub repository name as-is. This fails for the GNS3 repository.

$ docker pull ghcr.io/GNS3/test
invalid reference format: repository name must be lowercase

Solution: Either make the DOCKER_ACCOUNT variable lowercase in the workflow or in the build system.
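
A minimal sketch of the workflow-side fix, assuming the account comes from the Actions default variable GITHUB_REPOSITORY_OWNER (here "GNS3"):

# hedged sketch: lower-case the account before composing image names
$ DOCKER_ACCOUNT=$(echo "$GITHUB_REPOSITORY_OWNER" | tr '[:upper:]' '[:lower:]')
$ docker pull "ghcr.io/$DOCKER_ACCOUNT/test"    # pulls ghcr.io/gns3/test instead of ghcr.io/GNS3/test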

grossmj commented 1 year ago

In commit https://github.com/GNS3/gns3-registry/commit/3386ba0e08c2c3d6e76f5b40eb9f3573c9835ef8 the repository secrets DOCKERHUB_USERNAME and DOCKERHUB_TOKEN were undefined.

Maybe the secrets haven't been added to the repo yet? I don't have the rights to add secrets from the GitHub web interface; however, I could add them from the command line by following this article: https://devopsjournal.io/blog/2022/11/02/GitHub-secrets-without-admin-rights

$ gh secret list
NAME                UPDATED
DOCKERHUB_TOKEN     about 32 minutes ago
DOCKERHUB_USERNAME  about 34 minutes ago
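
For reference, the secrets could also be set from the command line (values elided; whether this works without admin rights depends on the approach from the linked article):

$ gh secret set DOCKERHUB_USERNAME --body "<username>"
$ gh secret set DOCKERHUB_TOKEN --body "<access token>"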

Solution: Either make the DOCKER_ACCOUNT variable lowercase in the workflow or in the build system.

Just noticed that; I will force it to be lowercase. Thanks 👍

grossmj commented 1 year ago

Hmm, I am not sure I am doing it the right way; maybe the PR needs to be merged for the build to work? Technically the branch is on your side, right? b-ehlers:master

b-ehlers commented 1 year ago

Commit 3386ba0: I fear I can't help with this. The logs show that no username and password are used (see the attached screenshot docker_build_err_3386ba0).

Commit 72c9137: Have you granted read/write permissions to GITHUB_TOKEN? I got the same error before doing that, see https://github.com/b-ehlers/gns3-registry-docker/actions/runs/5099280405/jobs/9166879279

b-ehlers commented 1 year ago

Hmm, I am not sure I am doing it the right way; maybe the PR needs to be merged for the build to work? Technically the branch is on your side, right? b-ehlers:master

I am not sure either. Maybe you can fork my repository b-ehlers/gns3-registry-docker to a new repository, either here in GNS3 or under your own account, grossmj, and see what happens.

grossmj commented 1 year ago

I am not sure either. Maybe you can fork my repository b-ehlers/gns3-registry-docker to a new repository, either here in GNS3 or under your own account, grossmj, and see what happens.

I think that's why I got "403 Forbidden" errors. Things work better after forking: https://github.com/grossmj/gns3-registry-docker.

grossmj commented 1 year ago

Do you think we should publish images to both Docker Hub and ghcr.io?

b-ehlers commented 1 year ago

I am not sure either. Maybe you can fork my repository b-ehlers/gns3-registry-docker to a new repository, either here in GNS3 or under your own account, grossmj, and see what happens.

I think that's why I got "403 Forbidden" errors. Things work better after forking: https://github.com/grossmj/gns3-registry-docker.

That is good news. Also, setting the packages scope permission in the workflow is much better than requiring the user to grant read/write permissions to GITHUB_TOKEN. I just tested this on my repository: I reset the permissions of GITHUB_TOKEN to read, and a build with an upload to ghcr.io is still working.
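
For context, a sketch of what the resulting login step looks like, assuming the workflow grants the packages scope to the token and exposes it as GITHUB_TOKEN in the step environment:

# hedged sketch: log in to ghcr.io with the workflow token (GITHUB_ACTOR is the Actions default variable)
$ echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin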

grossmj commented 1 year ago

I merged and the images have been built and pushed to Docker Hub successfully :) Thanks again for your PR.

I will try to find time to deal with the other problematic images.

b-ehlers commented 1 year ago

Do you think we should publish images to both Docker Hub and ghcr.io?

The con, of course, is that the build script currently can't handle that. Some tests are needed to see whether copying from the main registry to the backup is preferable, or whether a second build pushed to the backup registry is easier. The docker buildx cache should make the second build very fast.
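
If the "second build" route is taken, a minimal sketch would simply repeat the buildx invocation with the backup registry as the tag; the second run is served almost entirely from the cache (the image name is illustrative):

# hedged sketch: push the same build to both registries (image name is illustrative)
$ docker buildx build --push -t docker.io/gns3/some-appliance:latest .
$ docker buildx build --push -t ghcr.io/gns3/some-appliance:latest .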

On the pro side, we get some redundancy if one of the docker repository providers changes their policy for hosting docker images.

b-ehlers commented 1 year ago

I merged and the images have been built and pushed to Docker Hub successfully :) Thanks again for your PR.

That's fine. I assume you are using a temporary DockerHub repository so as not to spoil the official GNS3 docker repository. The official https://hub.docker.com/u/gns3 doesn't show any changes.

I will try to find time to deal with the other problematic images.

Honestly I would change the policy for appliances. It can't be your job to maintain all appliances. Every appliance has a maintainer and it's his job to keep the appliance in a good state. If an appliance is in bad shape, create an issue and mail the maintainer. If he doesn't fix the issue within a certain time, delete the appliance or (if important) take it over.

grossmj commented 1 year ago

On the pro side, we get some redundancy if one of the docker repository providers changes their policy for hosting docker images.

That's exactly what I was thinking, especially since there are pull limits on Docker Hub ("Anonymous and Free Docker Hub users are limited to 100 and 200 container image pull requests per six hours"), so it is not unrealistic to hit that limit if an image is popular. So providing an alternative is good imo.

That's fine. I assume you are using a temporary DockerHub repository so as not to spoil the official GNS3 docker repository. The official https://hub.docker.com/u/gns3 doesn't show any changes.

Actually no, I noticed that too and I am wondering if there is an issue somewhere. I assumed it worked because I could see this kind of message in Actions:

#7 exporting to image
#7 pushing layers 15.6s done
#7 pushing manifest for docker.io/***/kalilinux:latest@sha256:8ea014df100b028df44154d39cdf6b4bdb7d61f1c4c59393c80d5c7178967f8f
#7 pushing manifest for docker.io/***/kalilinux:latest@sha256:8ea014df100b028df44154d39cdf6b4bdb7d61f1c4c59393c80d5c7178967f8f 0.8s done
#7 DONE 146.2s

Honestly I would change the policy for appliances. It can't be your job to maintain all appliances. Every appliance has a maintainer and it's his job to keep the appliance in a good state. If an appliance is in bad shape, create an issue and mail the maintainer. If he doesn't fix the issue within a certain time, delete the appliance or (if important) take it over.

I very much agree; I can't possibly maintain all appliances. I will review the current policy and update it accordingly :+1:

grossmj commented 1 year ago

Actually no, I noticed that too and I am wondering if there is an issue somewhere. I assumed it worked because I could see this kind of message in Actions.

I found the problem: I actually use the user ***** to push to Docker Hub. This user is under the gns3 org. We should upload to the org, not the user.

b-ehlers commented 1 year ago

On the pro side, we get some redundancy if one of the docker repository providers changes their policy for hosting docker images.

That's exactly what I was thinking, especially since there are pull limits on Docker Hub ("Anonymous and Free Docker Hub users are limited to 100 and 200 container image pull requests per six hours"), so it is not unrealistic to hit that limit if an image is popular. So providing an alternative is good imo.

I don't think the current rate limit for DockerHub is an issue. It restricts the number of image downloads a user can do, not the number of downloads of an image. So I can't download more than 100/200 containers in 6 hours, but others can continue downloading images. There is no limit on how often an image is downloaded by different people.

But that policy might change and then it's fine to have alternatives.

grossmj commented 1 year ago

We should upload to the org, not the user.

Looks good now: https://hub.docker.com/u/gns3 \o/

grossmj commented 1 year ago

I don't think the current rate limit for DockerHub is an issue. It restricts the number of image downloads a user can do, not the number of downloads of an image. So I can't download more than 100/200 containers in 6 hours, but others can continue downloading images. There is no limit on how often an image is downloaded by different people.

That's right, I read too quickly ;)

b-ehlers commented 1 year ago

We should upload to the org, not the user.

Looks good now: https://hub.docker.com/u/gns3 \o/

Yes, that's better.

But I think the build script needs an update. Currently the DOCKER_ACCOUNT environment variable has two roles: it is used as the docker repository name when creating the full docker image name, and it is used (for private repositories) as the username to log in. For "normal" users that's fine, but it doesn't work for organizations like GNS3.

I propose adding a DOCKER_REPOSITORY environment variable that serves the current role of DOCKER_ACCOUNT, plus a DOCKER_USERNAME that, together with DOCKER_PASSWORD, will optionally be used to log in to the docker registry. The combined DOCKER_ACCOUNT can then go.
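
A minimal sketch of how the split could look in the build script (variable names from the proposal above, everything else illustrative):

# hedged sketch: DOCKER_REPOSITORY names the target namespace, DOCKER_USERNAME/DOCKER_PASSWORD
# are only used when a login is required (values and image name are illustrative)
$ export DOCKER_REPOSITORY=gns3 DOCKER_USERNAME=<username> DOCKER_PASSWORD=<token>
$ echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
$ docker buildx build --push -t "docker.io/$DOCKER_REPOSITORY/some-appliance:latest" .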

Shall I create the change, or do you want to do it?

grossmj commented 1 year ago

Shall I create the change, or do you want to do it?

Yes please go ahead and open a new PR, thanks :+1: