keel-hq / keel

Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
https://keel.sh
Mozilla Public License 2.0

Helm provider: Polling a repository from behind a corporate proxy #338

Open mrsimpson opened 5 years ago

mrsimpson commented 5 years ago

I have a helm chart pulling an image from a private docker repository. An image pull secret has been specified in the chart, it has also been added to the values.yaml in the keel property:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: force
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: pull
  # polling schedule
  pollSchedule: "@every 2m"
  # images to track and update
  images:
    - repository: image.repository
      tag: image.tag
      imagePullSecret: image.imagePullSecret

However, it seems as if Keel is not able to see the image: time="2019-01-23T10:11:26Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

The image pull secret (of course) resides in the namespace of the deployment. Since it's used by multiple deployments, it's not part of a chart (and thus does not feature labels).

When triggering the webhook from Dockerhub, I see some additional strange log output that I cannot understand:

time="2019-01-23T09:58:58Z" level=info msg="provider.kubernetes: processing event" registry= repository=assistify/operations tag=latest
time="2019-01-23T09:58:58Z" level=info msg="provider.kubernetes: no plans for deployment updates found for this event" image=assistify/operations tag=latest

Have I got something completely wrong?

rusenask commented 5 years ago

Hi, seems like Keel can't talk to your helm tiller service. Which namespace have you used to deploy keel?

mrsimpson commented 5 years ago

kube-system

| => kubectl get service -n kube-system
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
keel                      ClusterIP   10.101.76.90     <none>        80/TCP                        25m
kube-dns                  ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                 43d
kubernetes-dashboard      ClusterIP   10.103.78.171    <none>        443/TCP                       42d
tiller-deploy             ClusterIP   10.110.118.148   <none>        44134/TCP                     41d

Can I somehow verify connectivity from inside the keel pod?
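
One way might be to exec into the pod (e.g. `kubectl exec -it <keel-pod> -n kube-system -- sh`) and test raw TCP reachability. A minimal sketch of such a check in Python, assuming the default Tiller address from this thread (this only proves the socket connects, not that gRPC works):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "tiller-deploy:44134" is the default Tiller address mentioned in this thread;
    # run this from inside the cluster, otherwise the name won't resolve.
    print(check_tcp("tiller-deploy", 44134))
```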

mrsimpson commented 5 years ago

Maybe I should add that I set the env variables HTTP(S)_PROXY on the Keel pod in order to be able to poll via our corporate proxy. The proxy, however, is bypassed within the cluster (exceptions for 10.* et al.):

tinyproxy-conf

no upstream ".kube-system"
no upstream ".default"
no upstream ".utils"
no upstream "10.0.0.0/8"
no upstream "172.16.0.0/12"
rusenask commented 5 years ago

The problem here is that Keel can't connect to Tiller, so it doesn't get the list of images it should start tracking. There's an env variable:

TILLER_ADDRESS

that defaults to tiller-deploy:44134. That does seem to be the correct service, though. Could there be something inside your cluster that prevents it from calling the tiller service?

mrsimpson commented 5 years ago

Yes, I verified that it is indeed the proxy getting in my way. I removed it; now I am not able to poll, but I don't get the error message msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm anymore. 👍

The error messages msg="provider.helm: failed to get config for release" error="policy not specified" namespace=kube-system release=... indicate that Keel can talk to Tiller now.

Polling, however, is not possible now. Is there any built-in option to poll from behind a proxy?

rusenask commented 5 years ago

That error is not from the HTTP client that fetches images; it just indicates that a chart it found had no policy specified. From the registry client I would expect to see HTTP errors about contacting the registry.

Maybe instead of tiller-deploy:port you could specify the IP address of the Tiller service? I have never tried it, but this might work (use whatever tiller-deploy resolves to):

TILLER_ADDRESS=10.110.118.148:44134
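
If Keel is deployed via its Helm chart, the same address can presumably be set through the chart value that feeds this env var (flag name as used later in this thread; the IP is the tiller-deploy ClusterIP from the service listing above):

```shell
# Hypothetical invocation; helmProvider.tillerAddress is the chart value
# that populates TILLER_ADDRESS on the Keel pod.
helm upgrade --install keel --namespace=kube-system keel-charts/keel \
  --set-string helmProvider.tillerAddress="10.110.118.148:44134"
```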

mrsimpson commented 5 years ago

@rusenask maybe I should be a bit more verbose, I think I understood most of it now.

Our setting is a cluster in a VPC behind a corporate proxy. We have a private repository on Dockerhub. I wanted to poll Dockerhub instead of exposing a public webhook, so I configured keel as per the documentation. When doing this (Keel based on Helm chart version 0.7.6), I get the following error log:

time="2019-01-23T12:43:26Z" level=error msg="trigger.poll.RepositoryWatcher.Watch: failed to add image watch job" error="Get https://index.docker.io/v2/assistify/operations/manifests/latest: dial tcp: lookup index.docker.io on 10.96.0.10:53: no such host" image="namespace:assistify,image:index.docker.io/assistify/operations,provider:helm,trigger:poll,sched:@every 2m,secrets:[assistify-private-registry]"
time="2019-01-23T12:43:26Z" level=error msg="trigger.poll.manager: got error(-s) while watching images" error="encountered errors while adding images: Get https://index.docker.io/v2/assistify/operations/manifests/latest: dial tcp: lookup index.docker.io on 10.96.0.10:53: no such host"

I assumed this to be an issue with the corporate proxy, so I modified the Keel helm chart locally so that the keel deployment gets values which it propagates to the HTTP(S)_PROXY environment variables. After I did this, it seems as if Keel is no longer able to talk to Tiller. I checked the proxy, which runs inside our cluster: it should establish direct connections within the cluster and route to the corporate proxy for other resources.

But no matter how I specify the TILLER_ADDRESS, and even though it actually matches exceptions in the proxy conf, I again get msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm, which as far as I understand indicates no connectivity to Tiller.

I also tried to set the NO_PROXY env variable on the Keel pod, but this has no effect either.
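
(Note: NO_PROXY handling is client-specific. Many implementations do plain hostname/suffix matching and do not understand CIDR ranges, so an entry like 10.0.0.0/8 may silently never match an IP address. Python's stdlib matcher illustrates this behavior; Keel is written in Go, whose proxy handling has its own rules, so this is only illustrative:)

```python
import urllib.request

# Exact-host and domain-suffix entries match...
proxies = {"no": "tiller-deploy,.kube-system"}
print(urllib.request.proxy_bypass_environment("tiller-deploy:44134", proxies))
print(urllib.request.proxy_bypass_environment("tiller-deploy.kube-system:44134", proxies))

# ...but a CIDR range is treated as a literal string and never matches an IP:
proxies = {"no": "10.0.0.0/8"}
print(urllib.request.proxy_bypass_environment("10.110.118.148:44134", proxies))
```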

So instead of specifying the proxy via the environment (which obviously breaks Keel-to-Tiller connectivity), is there a way to poll registries from behind a proxy?

rusenask commented 5 years ago

You can use my other project webhookrelay.com: https://keel.sh/v1/guide/documentation.html#Receiving-webhooks-without-public-endpoint

It works by creating a connection to the cloud service; any webhooks are streamed back to the internal network over that tunnel, and through a sidecar it would just call Keel on the http://localhost:9300/v1/webhooks/dockerhub endpoint. It provides additional security compared to just exposing your service to the internet, by only allowing one-way traffic and only to a specific server & path. There's a free tier of 150 webhooks/month, but I can bump it up a bit if you like it.

mrsimpson commented 5 years ago

Yup, I have seen that as well. I first wanted to go for a self-hosted solution, simply because ordering SaaS in our company is a burden I'm not willing to take on... Any chance of getting polling enabled?

rusenask commented 5 years ago

Well, one option is to make your proxy always route tiller queries to tiller-deploy. Another, easier option would be to use the k8s provider instead of helm: just add a keel policy to your chart's deployment.yaml template annotations and disable the helm provider altogether. Then the only queries to the outside world will be for the registry.
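
A sketch of what that would look like in a chart's deployment.yaml template (annotation keys as per the Keel docs; the name, labels, and policy/schedule values here are just examples, and the image is the one from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@every 2m"
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: assistify/operations:latest
```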

mrsimpson commented 5 years ago

Ah, this means I can set http_proxy - and the only thing I’ll lose is the decoupling of deployment and keel-config. Sounds like a plan, let me check later. Thanks for the awesome support and the amazing tool!

mrsimpson commented 5 years ago

Using the k8s provider works as expected 👍 However, I'm keeping this open with a changed subject: proxy support should somehow be worked out.

Nevertheless: Awesome work, @rusenask 🎉

botzill commented 5 years ago

Hi.

I also get this issue:

time="2019-03-06T16:51:07Z" level=debug msg="tracked images" images="[]"
time="2019-03-06T16:51:07Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-06T16:51:12Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

My tiller is running OK, but I'm not sure why Keel can't connect to it.

screen shot 2019-03-06 at 19 15 10

My connection endpoint:

tillerAddress: "tiller-deploy.kube-system:44134"

rusenask commented 5 years ago

is keel in the same namespace?

botzill commented 5 years ago
screen shot 2019-03-06 at 19 41 23

Yes it is, both in kube-system

botzill commented 5 years ago

So, as I understand it, I don't need to specify any

keel.sh/policy: major
keel.sh/trigger: poll

annotations; by default, the helm provider will watch all images deployed via helm charts?

rusenask commented 5 years ago

yes, as long as there's a keel config in the values.yaml of your chart: https://keel.sh/v1/guide/documentation.html#Helm-example

botzill commented 5 years ago

Well, yes, it's enabled, here it is:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: all
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: poll
  # polling schedule
  pollSchedule: "@every 3m"
  # images to track and update
#  images:
#    - repository: image.repository
#      tag: image.tag

rusenask commented 5 years ago

In this case it won't track anything; those

#  images:
#    - repository: image.repository
#      tag: image.tag

shouldn't be commented out, and they should be targeting the other helm variables that hold the image name and tag :)
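
In other words, a sketch of a working values.yaml, assuming the chart defines image.repository and image.tag as in the Helm example from the docs (the image values themselves are placeholders):

```yaml
keel:
  policy: all
  trigger: poll
  pollSchedule: "@every 3m"
  images:
    - repository: image.repository   # dotted path to the chart value, not the image name itself
      tag: image.tag

image:
  repository: index.docker.io/your-org/your-image   # hypothetical: your chart's actual image values
  tag: "1.0.0"
```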

botzill commented 5 years ago

Yes, I thought that this was the reason and tried without them.

I added them back, but still the same errors:

time="2019-03-07T14:41:59Z" level=debug msg="added deployment kube-eagle" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment tiller-deploy" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment external-dns" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment nginx-ingress-controller" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment kubernetes-replicator" context=translator
time="2019-03-07T14:42:02Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:02Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:05Z" level=debug msg="trigger.poll.manager: performing scan"
(... the same error / "tracked images" / "performing scan" cycle repeats every five seconds through 14:43:10 ...)

So, are these options only for auto-update? Can I set approvals in this section as well?

botzill commented 5 years ago

So, the time="2019-03-07T14:41:59Z" level=debug msg="added ..." lines indicate that it did connect to tiller. But still, I have these errors:

level=debug msg="tracked images" images="[]"
time="2019-03-07T14:43:05Z" level=debug msg="trigger.poll.manager: performing scan"

Tested with this config:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: all
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: poll
  # polling schedule
  pollSchedule: "@every 1m"
  # approvals required to proceed with an update
  approvals: 1
  # approvals deadline in hours
  approvalDeadline: 24
  # images to track and update
  images:
    - repository: image.repository
      tag: image.tag

botzill commented 5 years ago

Maybe there are no updates for any of the images? Should I see a log line indicating that everything is up to date? Or would I receive a message in Slack about that?

Thx.

rusenask commented 5 years ago

Hi, no, unfortunately it seems that Keel cannot connect to Helm:

time="2019-03-07T14:43:00Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

Could there be some internal network restrictions? Do you use a proxy internally that Keel has to use as well?

botzill commented 5 years ago

I think there are no restrictions, here is how I deploy it:

https://github.com/botzill/terraform-tinfoil-tiller/blob/master/main.tf

service

kube-system    tiller-deploy                       ClusterIP      10.245.156.186   <none>         44134/TCP                                                                  10h
k describe service tiller-deploy -n kube-system
Name:              tiller-deploy
Namespace:         kube-system
Labels:            app=helm
                   name=tiller
Annotations:       <none>
Selector:          app=helm,name=tiller
Type:              ClusterIP
IP:                10.245.156.186
Port:              tiller  44134/TCP
TargetPort:        tiller/TCP
Endpoints:         10.244.93.2:44134,10.244.93.4:44134
Session Affinity:  None
Events:            <none>

Thx.

botzill commented 5 years ago

Is there any way I can debug this and see what is going on?

klrservices commented 5 years ago

Looks like changing the tiller address (removing the cluster.local suffix) fixes the problem:

--set-string helmProvider.tillerAddress="tiller-deploy.kube-system:44134"

botzill commented 5 years ago

Hi @klrservices, I did change this and it's still not working.

@rusenask I'm running this on a DigitalOcean k8s cluster. I see that they use https://cilium.io/ out of the box. Could this be the reason why it can't connect? I really want to make this work.

Thx.

rusenask commented 5 years ago

Unlikely; if both keel and tiller are in the same namespace, it should be reachable. Can you try

--set-string helmProvider.tillerAddress="tiller-deploy:44134"

?

botzill commented 5 years ago

Thx @rusenask

I did try with that new address but I'm still having these issues:

time="2019-03-29T10:30:43Z" level=info msg="extension.credentialshelper: helper registered" name=aws
time="2019-03-29T10:30:43Z" level=info msg="bot: registered" name=slack
time="2019-03-29T10:30:43Z" level=info msg="keel starting..." arch=amd64 build_date=2019-02-06T223140Z go_version=go1.10.3 os=linux revision=0944517e version=0.13.1
time="2019-03-29T10:30:43Z" level=info msg="extension.notification.slack: sender configured" channels="[k8s-stats]" name=slack
time="2019-03-29T10:30:43Z" level=info msg="notificationSender: sender configured" sender name=slack
time="2019-03-29T10:30:43Z" level=info msg="provider.kubernetes: using in-cluster configuration"
time="2019-03-29T10:30:43Z" level=info msg="provider.helm: tiller address 'tiller-deploy:44134' supplied"
time="2019-03-29T10:30:43Z" level=info msg="provider.defaultProviders: provider 'kubernetes' registered"
time="2019-03-29T10:30:43Z" level=info msg="provider.defaultProviders: provider 'helm' registered"
time="2019-03-29T10:30:43Z" level=info msg="extension.credentialshelper: helper registered" name=secrets
time="2019-03-29T10:30:43Z" level=info msg="trigger.poll.manager: polling trigger configured"
time="2019-03-29T10:30:43Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:30:43Z" level=info msg="webhook trigger server starting..." port=9300
time="2019-03-29T10:30:44Z" level=info msg=started context=buffer
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=deployments
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=cronjobs
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=daemonsets
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=statefulsets
time="2019-03-29T10:30:45Z" level=debug msg="added cronjob mongodb-backup-job" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment hpa-operator-kube-metrics-adapter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubernetes-dashboard" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cilium-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment coredns" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager-cainjector" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment nginx-ingress-default-backend" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment tiller-deploy" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment some-api" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment external-dns" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment jobs-seeker" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kube-eagle" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-grafana" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager-webhook" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment hpa-operator-hpa-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubernetes-replicator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment metrics-server" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubedb-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment longhorn-driver-deployer" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment longhorn-ui" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-kube-state-metrics" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-prometheus-oper-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment keel" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-primary" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-secondary" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset alertmanager-prometheus-prometheus-oper-alertmanager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset prometheus-prometheus-prometheus-oper-prometheus" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset invoiceninja" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mysqldb" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-arbiter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset cilium" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset csi-do-node" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset do-node-agent" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset kube-proxy" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset engine-image-ei-6e2b0e32" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset longhorn-manager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset prometheus-prometheus-node-exporter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset nginx-ingress-controller" context=translator
time="2019-03-29T10:30:48Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:30:48Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:30:51Z" level=debug msg="trigger.poll.manager: performing scan"
(... the same error / "tracked images" / "performing scan" cycle repeats every five seconds through 10:32:11 ...)

botzill commented 5 years ago

Hi @rusenask, any other hints on what I could check for this?

rusenask commented 5 years ago

hi, is your environment also behind a proxy?

botzill commented 5 years ago

What do you mean, behind a proxy? How can I check that? It's a k8s cluster on DigitalOcean, so I guess there is no proxy.

rusenask commented 5 years ago

Alright, I just thought about it because the title of the issue mentions a corporate proxy, and gRPC traffic usually just gets blocked if the proxy doesn't recognize it. Could you try deploying it with the same configuration somewhere else, like minikube, with the same versions of Keel and Tiller?

botzill commented 5 years ago

Hi @rusenask, I did try it on a custom cluster and it seems to work OK.

I'm wondering what could be wrong with the DO cluster then?

Anishmourya commented 5 years ago

Keel must be in the same namespace where the tiller service is defined. helm upgrade --install keel --namespace=kube-system keel-charts/keel --set helmProvider.enabled="true" worked for me, since my tiller is installed in kube-system.

emoxam commented 2 years ago

I'm using Helm v3.9.4, so there is no Tiller, but I get the same error. Auto-update of deployments works, but the error fills the log:

time="2022-10-03T16:11:58Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2022-10-03T16:12:03Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

emoxam commented 2 years ago

Ok. The manual at https://keel.sh/docs/#deploying-with-helm lies:

  1. helm upgrade --install keel --namespace=keel keel-charts/keel --set helmProvider.enabled="false" doesn't work, because the keel-charts repo doesn't exist.
  2. The keel namespace doesn't exist either, so I used helm upgrade --install keel --namespace=kube-system keel/keel --set helmProvider.enabled="false". But I am not sure I need to use the kube-system namespace; it could be enough to use the default namespace.
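
For reference, the chart repo needs to be added under a name first. A sketch assuming the repo URL charts.keel.sh from the Keel docs (verify against the current docs); with Helm 3 the namespace can be created during install:

```shell
# Add the Keel chart repository (URL per keel.sh docs), then install
# into a dedicated namespace, creating it if it does not exist.
helm repo add keel https://charts.keel.sh
helm repo update
helm upgrade --install keel keel/keel --namespace keel --create-namespace \
  --set helmProvider.enabled="false"
```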

btw, https://keel.sh/docs/#deploying-with-kubectl doesn't work either: neither kubectl apply -f https://sunstone.dev/keel?namespace=keel&username=admin&password=admin&tag=latest nor kubectl apply -f https://sunstone.dev/keel?namespace=default&username=admin&password=admin&relay_key=TOKEN_KEY&relay_secret=TOKEN_SECRET&relay_tunnel=TUNNEL_NAME&tag=latest works.