ansible-collections / community.kubernetes

Kubernetes Collection for Ansible
https://galaxy.ansible.com/community/kubernetes
GNU General Public License v3.0

Align helm* modules credential options with AWX built-in K8s credential type. #279

Closed tima closed 3 years ago

tima commented 3 years ago
SUMMARY

The helm modules can use a kubeconfig file path and context to access a k8s cluster. AWX is introducing a built-in K8s credential type that uses environment variables to provide access credentials, like so:

        'env': {
            'K8S_AUTH_HOST': '{{ host }}',
            'K8S_AUTH_API_KEY': '{{ bearer_token }}',
            'K8S_AUTH_VERIFY_SSL': '{{ verify_ssl }}',
            'K8S_AUTH_SSL_CA_CERT': '{{ tower.filename }}',
        },

Being limited to just a kubeconfig file makes the helm modules difficult and inconvenient to use in AWX.

All the helm modules in this collection should support access parameters that align with the built-in K8s credential type in AWX.
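
For illustration, here is a rough sketch of what an aligned helm task could look like, assuming the helm module grew the same connection options the k8s* modules already expose (host, api_key, validate_certs, ca_cert). The option names on the helm module and the release/chart values are assumptions for this sketch, not the module's current interface:

- name: Deploy a chart with AWX-injected credentials (illustrative sketch only)
  community.kubernetes.helm:
    name: my-release                  # placeholder release name
    chart_ref: stable/nginx-ingress   # placeholder chart
    release_namespace: default
    # These options mirror the K8S_AUTH_* environment variables the AWX
    # credential type injects; they are assumed here, not options the helm
    # module supports today.
    host: "{{ lookup('env', 'K8S_AUTH_HOST') }}"
    api_key: "{{ lookup('env', 'K8S_AUTH_API_KEY') }}"
    validate_certs: "{{ lookup('env', 'K8S_AUTH_VERIFY_SSL') }}"
    ca_cert: "{{ lookup('env', 'K8S_AUTH_SSL_CA_CERT') }}"

Ideally the modules would also honor the K8S_AUTH_* environment variables directly, as the k8s* modules do, so no explicit mapping would be needed.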

ISSUE TYPE
COMPONENT NAME

plugins/modules/helm*

ADDITIONAL INFORMATION

See #277

geerlingguy commented 3 years ago

@tima those types seem to be more relevant to use with OpenShift, and I think we decided to move all the OpenShift-auth-specific parts out into the okd collection...

howardjones commented 3 years ago

@geerlingguy what is the intended way to pass k8s credentials from AWX to (e.g.) the helm module, if not this? It looks like I could generate a kubeconfig using AWX custom credential types and 18 lines of template jammed into the injector JSON, but that doesn't feel optimal either.
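
For reference, the custom-credential-type approach mentioned above would look roughly like this in the credential type's injector configuration. This is a minimal sketch: the input field names host and api_token are assumed to be defined in the credential type's inputs, and CA certificate handling is omitted for brevity.

# Hypothetical injector configuration for an AWX custom credential type (sketch only).
# It renders a kubeconfig from the credential inputs and points KUBECONFIG at it.
file:
  template: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        server: {{ host }}
      name: injected-cluster
    users:
    - name: injected-user
      user:
        token: {{ api_token }}
    contexts:
    - context:
        cluster: injected-cluster
        user: injected-user
      name: injected-context
    current-context: injected-context
env:
  KUBECONFIG: '{{ tower.filename }}'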

geerlingguy commented 3 years ago

@howardjones - I understand, but I'm more wondering whether we need to find a way to make modules like the helm ones work better in OpenShift through some change/fix in the okd collection, or by pulling some of the auth pieces back from that collection into this one.

Are you using Helm, in this case, with an OpenShift cluster?

howardjones commented 3 years ago

No, it's regular kubernetes (Azure AKS). My intention is to be able to deploy dev environments on demand to one of several clusters, so I'm aiming to get the choice of cluster into the awx job, not the playbook. Looking at community.kubernetes.k8s, it appears that module already does support these environment variables. Is the plan for that to go away?
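
For example, a task like the following (a minimal sketch; the namespace name is just a placeholder) can authenticate purely through the K8S_AUTH_* environment variables injected by the AWX credential, with no kubeconfig or connection options set on the task itself:

- name: Create a namespace using only injected K8S_AUTH_* environment variables
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: dev-environment  # placeholder name for illustration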

(what's OKD?)

geerlingguy commented 3 years ago

@howardjones - Ah! Nevermind then, I was thinking of the wrong thing entirely. Ignore my comments :)

Yes, the Helm modules should be able to support the same env vars as the k8s* modules, so this is missing functionality that needs to be added.

tima commented 3 years ago

@howardjones OKD is the community distribution of Kubernetes that powers Red Hat's OpenShift: https://github.com/openshift/okd

kdelee commented 3 years ago

This will be good for AWX/Tower, as it will make the Kubernetes credentials that users can attach to job templates work with the helm modules.

howardjones commented 3 years ago

@kdelee This is my actual use-case. For now, I generate a kubeconfig from the passed awx credentials as a workaround:

- name: Create temporary kubeconfig from awx credentials (AWX only)
  when: tower_job_id is defined
  template:
    src: kube-config.j2
    dest: awx-kube-config
    mode: '0600'

- name: Deploy Helm Chart (AWX version)
  when: tower_job_id is defined
  community.kubernetes.helm:
    context: generated-context
    kubeconfig_path: awx-kube-config
    [etc...]

And the referenced kube-config.j2 template:

---
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: {{ lookup('env', 'K8S_AUTH_HOST') }}
    certificate-authority: {{ lookup('env', 'K8S_AUTH_SSL_CA_CERT') }}
  name: generated-cluster
users:
 - name: generated-user
   user:
     token: {{ lookup('env', 'K8S_AUTH_API_KEY') }}
contexts:
- context:
    cluster: generated-cluster
    namespace: default
    user: generated-user
  name: generated-context
current-context: generated-context

kdelee commented 3 years ago

@howardjones thanks for sharing that snippet. Yeah, I ran into this because we are going to start pulling helm in as a dependency so folks don't have to install it themselves (at least in Tower; I can only assume it will end up in the awx container too)... but when I went to verify I could use it, I ran into this :D

goneri commented 3 years ago

The following PR should resolve the situation: https://github.com/ansible-collections/community.kubernetes/pull/319. Feedback is welcome.