kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

Consider also publishing to gcr.io #479

Closed: drewfisher314 closed this issue 4 years ago

drewfisher314 commented 6 years ago

Since this project is part of Kubernetes incubation, would it be beneficial to also publish the container image at gcr.io? Having it published at the zalan.do domain raises questions about what happens if that domain should go away or have other issues.

linki commented 6 years ago

What location do you suggest?

drewfisher314 commented 6 years ago

heapster is published at gcr.io/google_containers/heapster-amd64 and heapster-influxdb at gcr.io/google_containers/heapster-influxdb-amd64. What about publishing there, at gcr.io/google_containers?

ideahitme commented 6 years ago

Having it published at the zalan.do domain raises questions around what happens if that domain should go away or have other issues.

The image published at the zalan.do registry is there for the sake of quick setup and experimentation. For actually running it in production, it is entirely up to you to publish it to your company's Docker registry. In this regard, gcr.io would only be a better option if it were a standard registry used by all Kubernetes projects for quick setup-and-run purposes.

EDIT: On a related topic, I think this: https://github.com/kubernetes-incubator/external-dns/blob/master/Dockerfile#L25 could be changed to use the Docker Hub alpine image to keep the Dockerfile registry-neutral.

hjacobs commented 6 years ago

I agree with @ideahitme, production users should always build the Docker image themselves or at least mirror it to their own registry (which could also be something managed like AWS ECR). For example, we (as Zalando) use some images from Docker Hub, gcr.io and quay.io, but always mirror them to our own internal registry so that we don't have any production dependencies on external registries. I don't see why one or the other registry should be "better", apart from the fact that some company might be bigger than another one :smirk:

I would welcome any (external) attempts to automatically mirror to some other registry, but I don't see us messing with credentials/accounts of different Docker registry providers.
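For reference, a minimal sketch of the mirroring workflow described above, assuming an ECR target and the AWS CLI v2 login flow; every registry path, account ID and tag below is a placeholder, not the project's published image coordinates:

# log in to the target registry (ECR is only an example, adjust for your provider)
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com
# pull the upstream image, retag it for the internal registry and push it
docker pull registry.example.com/some-team/external-dns:v0.5.0
docker tag registry.example.com/some-team/external-dns:v0.5.0 123456789012.dkr.ecr.eu-central-1.amazonaws.com/external-dns:v0.5.0
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/external-dns:v0.5.0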

linki commented 6 years ago

@hjacobs tbh, mirroring each and every image to our own Docker registry had other reasons. For a long time we used hyperkube, heapster etc. from gcr.io as well.

@drewfisher314 If you can help us in any way we're happy to have something on gcr.io in addition to our registry.

hjacobs commented 6 years ago

Closing this for now as I don't see any action item.

marshallford commented 5 years ago

Please reconsider pushing the image to Docker Hub, gcr.io, or maybe quay.io. The zalan.do registry choice made me initially wary of the project's stability, community, and significance in the k8s ecosystem. Why go out of your way to choose a strange registry for a public image?

Raffo commented 5 years ago

@marshallford I think this issue was already explained in the past: while you may not know that Docker registry, the image is provided for convenience and you can easily build your own if you want. That said, it's true that most Kubernetes projects use k8s.io as their registry nowadays, and we could think about asking whoever owns that registry to also set up a pipeline so that the images look consistent with the rest of the official Kubernetes ecosystem.

marshallford commented 5 years ago

@Raffo I don't understand what you are getting at and you seem to be contradicting yourself. Other popular projects like service-catalog, ingress-nginx, k8s dashboard, etc all use a mainstream registry. IMO this issue isn't resolved.

Raffo commented 5 years ago

I don't see how I am contradicting myself: the registry used is a perfectly valid registry and I trust it. The source is also provided, so you can easily build your own image if you don't trust it. That said, to be consistent with what other official Kubernetes projects offer, and given that the k8s.io registry is officially owned by the community, I support the initiative to start pushing to k8s.io as well. Of course, we need to figure out what is required to run a pipeline that can push to that repository. In the meantime the provided images are still valid. I am reopening the issue for now and asking the maintainers to chip in. /cc @njuettner @linki @ideahitme @hjacobs If I get all +1s, I'll go on and try to find the information needed to make this happen.

njuettner commented 5 years ago

@Raffo sounds good to me 😄!

hjacobs commented 5 years ago

Just some additional information: the AWS ALB Ingress Controller also does not use k8s.io, see https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/ and https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/v1.0.0/docs/examples/alb-ingress-controller.yaml#L74

So if anybody has spare time to set up and maintain an additional/new CI pipeline: OK, go for it. Otherwise I don't consider it worth the effort, considering that other Kubernetes projects don't adhere to the k8s.io rule either.

marshallford commented 5 years ago

Would it be possible to move to Travis CI for building/pushing? I see the delivery.yaml file in the project root, but it seems to be related to some internal pipeline? (or I'm just not familiar with the tooling being used). I'd like to submit a PR to push to multiple registries but without visibility it will be difficult (for myself and likely other outside contributors).

Raffo commented 5 years ago

Moving the build step to Travis doesn't seem like the best approach to me; I'd rather try to get external-dns built by the official Kubernetes infrastructure, which I assume would have credentials to push to the k8s.io registry.

devlounge commented 5 years ago

Any update on this? google-containers would indeed be a perfect place to put the image(s)

Raffo commented 5 years ago

No updates, but it's on my list (from last year, but I haven't forgotten about it!) to ask for that, and I will update the thread. In the meantime, you can trust the registry or build your own image as previously commented.
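For reference, "build your own image" boils down to roughly the following sketch, assuming the Dockerfile at the repository root; the target registry and tag are placeholders, and the project's Makefile may offer a more convenient build target:

git clone https://github.com/kubernetes-incubator/external-dns.git
cd external-dns
# depending on the Dockerfile, the binary may need to be built first (e.g. via make)
docker build -t registry.example.com/your-org/external-dns:custom .
docker push registry.example.com/your-org/external-dns:custom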

Raffo commented 5 years ago

The resolution for this issue is related to the project moving out of incubation (see #540 ), so it will take additional time and effort. Rest assured that we are working on it, it just takes a long time.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

Raffo commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

timja commented 4 years ago

/remove-lifecycle stale

Raffo commented 4 years ago

Hey @njuettner, do you maybe know how to proceed with that?

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

timja commented 4 years ago

This can be closed; the last release published the images to:

docker run asia.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help
docker run eu.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help
docker run us.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help
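As a usage note, pointing an existing deployment at one of these images could look roughly like the following; the deployment and container names are assumptions about your own manifest:

kubectl set image deployment/external-dns external-dns=us.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0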

niroowns commented 4 years ago

This can be closed; the last release published the images to:

docker run asia.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help
docker run eu.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help
docker run us.gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help

Hi @timja - do you know why gcr.io was not used? For example, the following does NOT work:

docker run gcr.io/k8s-artifacts-prod/external-dns/external-dns:v0.6.0 --help