pelotech / drone-helm3

A Drone plugin for deploying Helm charts with Helm 3
Apache License 2.0

Upgrade command seems to not use my kube_service_account parameter #103

Closed tolgap closed 4 years ago

tolgap commented 4 years ago

This is the error I get when doing a deploy using mode: upgrade:

Error: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:order66-staging:default"
cannot list resource "secrets" in API group "" in the namespace "order66-staging"

Based on that error, it looks like drone-helm3 is using the service account named default in the order66-staging namespace. But that is definitely not what I'm passing in:

- name: deploy
  image: pelotech/drone-helm3
  settings:
    mode: upgrade
    chart: ./helm
    values_file: [ "./helm/staging/values.yaml" ]
    release: order66
    kube_api_server: 10.245.0.1 # ClusterIP of my kubernetes api server
    kube_token:
      from_secret: kubernetes_token
    kube_service_account: order66 # <- Literally passing the service account here, seems to be ignored?
    namespace: order66-staging
    wait_for_upgrade: true
    force_upgrade: true
    skip_tls_verify: true

This is my Service Account:

# kubectl get serviceaccount order66 -n order66-staging -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: order66
    meta.helm.sh/release-namespace: order66-staging
  creationTimestamp: "2020-11-26T13:03:38Z"
  labels:
    app.kubernetes.io/instance: order66
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: order66
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: order66-0.1.0
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/version: {}
          f:helm.sh/chart: {}
    manager: Go-http-client
    operation: Update
    time: "2020-11-26T13:03:38Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:secrets:
        .: {}
        k:{"name":"order66-token-npfjg"}:
          .: {}
          f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-11-26T13:03:38Z"
  name: order66
  namespace: order66-staging
  resourceVersion: "1069117"
  selfLink: /api/v1/namespaces/order66-staging/serviceaccounts/order66
  uid: 89959f10-a2b8-4198-b8b5-e11a17eaab68
secrets:
- name: order66-token-npfjg

This is my ClusterRole:

# kubectl get clusterrole secret-reader -n order66-staging -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: order66
    meta.helm.sh/release-namespace: order66-staging
  creationTimestamp: "2020-11-26T13:28:28Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2020-11-26T13:28:28Z"
  name: secret-reader
  resourceVersion: "1073212"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/secret-reader
  uid: c552a159-f7ee-4e4a-b285-325e2d7dd4dc
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
  - create
  - update

And I have definitely bound that clusterrole to my service account using a RoleBinding:

$> kubectl auth can-i get secrets --namespace=order66-staging --as="system:serviceaccount:order66-staging:order66"
yes
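For reference, a RoleBinding along these lines would produce that `can-i` result. The binding name here is hypothetical; the subject and roleRef names are taken from the resources shown above:

```yaml
# Hypothetical RoleBinding: grants the secret-reader ClusterRole to the
# order66 service account, scoped to the order66-staging namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order66-secret-reader   # assumed name, not shown in the issue
  namespace: order66-staging
subjects:
- kind: ServiceAccount
  name: order66
  namespace: order66-staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
```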

So what could be going wrong here?

ErinCall commented 4 years ago

Huh, I'm not sure why it's doing that. Will you do a run with debug: true and paste the results?

One other thing that jumped out at me is your values_file setting—it should be values_files, plural. That probably isn't causing this problem, but it will cause other problems later on :)
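Putting both suggestions together, the step would look something like this (all other settings unchanged from the original; `debug` is the extra flag requested above):

```yaml
- name: deploy
  image: pelotech/drone-helm3
  settings:
    mode: upgrade
    debug: true  # extra output to diagnose the service-account issue
    chart: ./helm
    values_files: [ "./helm/staging/values.yaml" ]  # note: plural
    release: order66
    kube_api_server: 10.245.0.1
    kube_token:
      from_secret: kubernetes_token
    kube_service_account: order66
    namespace: order66-staging
    wait_for_upgrade: true
    force_upgrade: true
    skip_tls_verify: true
```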

tolgap commented 4 years ago

@ErinCall that's very sharp; I hadn't noticed that typo.

Once I fixed the values_files typo, I could see that my kube_service_account was actually being used. I didn't change anything else.

All I had to do was make sure the service account was allowed to perform all the steps, by binding it to the admin ClusterRole for the namespace order66-staging.
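A sketch of that binding, assuming a RoleBinding scoped to the namespace (binding to the built-in admin ClusterRole per namespace is the usual pattern; a ClusterRoleBinding would grant admin cluster-wide instead). The binding name is hypothetical:

```yaml
# Hypothetical RoleBinding: grants the built-in admin ClusterRole to the
# order66 service account within the order66-staging namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order66-admin   # assumed name, not shown in the issue
  namespace: order66-staging
subjects:
- kind: ServiceAccount
  name: order66
  namespace: order66-staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
```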

I'm closing this issue. Thanks again for the quick response!