conjure-up / spells

conjure-up spells registry

k8s: support federation, merge config.JUJU_MODEL into separate contexts in a single config #61

ktsakalozos opened this issue 7 years ago (status: Open)

When you create multiple clusters you get multiple config files under ~/.kube.

As a result, if you do a kubectl config get-contexts you do not see your new cluster; you may actually see an old one.

In the case where you want to have a federation of clusters this becomes rather problematic.

adam-stokes commented 7 years ago

The thought here is that you run ~/bin/kubectl.JUJU_MODEL so that you always get the correct cluster for whatever model you are inspecting. Are you saying this is not happening?

ktsakalozos commented 7 years ago

The kubernetes user is expected to be able to do a kubectl config get-contexts to list all the clusters they have available and set the active one with kubectl config use-context my-cluster-name. This is particularly useful in the context of a cluster federation. In a federation you have many clusters and you can issue commands to each one by setting the --context param in kubectl (eg kubectl --context=federation create -f ingress/k8shserver.yaml). The approach we have now with ~/bin/kubectl.JUJU_MODEL is ok for a single cluster environment. Federation was an alpha feature in v1.5 but is available now.
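For illustration, the multi-cluster workflow would look roughly like this (the context names and manifest path below are hypothetical, not anything conjure-up sets up today):

kubectl config get-contexts                                    # list every cluster known to ~/.kube/config
kubectl config use-context my-east-cluster                     # make one cluster the default for plain kubectl
kubectl --context=federation create -f ingress/ingress.yaml    # or target a specific context per command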

adam-stokes commented 7 years ago

Ok, can we discuss the best course of action to handle this in our spells? Bring in @marcoceppi, @chuckbutler, @johnsca for more input.

adam-stokes commented 7 years ago

First question I have is: Should we be merging all configs into a single ~/.kube/config file?

mbruzek commented 7 years ago

I think we should discuss having one config file with many contexts. We can use the kubectl config subcommand to create contexts for each environment rather than having separate files. It is my understanding that, now that kubectl is snapped up, we are installing it via snap. The spell could be modified to add these contexts to the config (using the kubectl config subcommand). The tough part would be removing configs when a deployment is conjure-down'd or removed by juju.
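For the teardown side, a rough sketch of the cleanup with stock kubectl (assuming a hypothetical context named my-k8s with matching -cluster/-user entries) could be:

kubectl config delete-context my-k8s
kubectl config delete-cluster my-k8s-cluster
kubectl config unset users.my-k8s-user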

adam-stokes commented 7 years ago

@mbruzek ok this sounds good. I think we can use the same naming convention that we do now for the separate files and just clean those up on a conjure-down (this feature needs to be added in a generic way depending on what deployments are there).

marcoceppi commented 7 years ago

I have a script for this already, I'll pastebin it shortly.

marcoceppi commented 7 years ago

#!/usr/bin/python3
#
# Copyright 2017 Canonical, Ltd. Authored by Marco Ceppi
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.

import os
import sys
import yaml

def get_kubeconfig(cfg):
    # Load a kubeconfig file as a dict; returns None if the file does not exist.
    if not os.path.exists(cfg):
        return None

    with open(cfg) as f:
        return yaml.safe_load(f.read())

def merge_cfg(cfg, new_cfg):
    # Recursively merge new_cfg into cfg: lists (clusters, contexts, users)
    # are concatenated, nested dicts are merged key by key.
    if not isinstance(cfg, dict) or not isinstance(new_cfg, dict):
        return None
    for k, v in new_cfg.items():  # dict.iteritems() is Python 2 only; this is a python3 script
        if k not in cfg:
            cfg[k] = v
        elif isinstance(v, list):
            cfg[k] = cfg[k] + v
        else:
            cfg[k] = merge_cfg(cfg[k], v)

    return cfg

def merge_config(kubecfg_file, newcfg_file, name):
    # Rename the first cluster/user/context in the freshly downloaded config to
    # '<name>-cluster', '<name>-user' and '<name>', then merge it into the main kubeconfig.
    kubecfg = get_kubeconfig(kubecfg_file)
    newcfg = get_kubeconfig(newcfg_file)

    if not kubecfg or not newcfg:
        print(kubecfg, kubecfg_file)
        return 1

    newcfg['clusters'][0]['name'] = '{}-cluster'.format(name)
    newcfg['users'][0]['name'] = '{}-user'.format(name)
    newcfg['contexts'][0]['name'] = name
    newcfg['contexts'][0]['context']['cluster'] = '{}-cluster'.format(name)
    newcfg['contexts'][0]['context']['user'] = '{}-user'.format(name)

    kubecfg = merge_cfg(kubecfg, newcfg)

    with open(kubecfg_file, 'r+') as f:
        f.seek(0)
        f.write(yaml.dump(kubecfg, default_flow_style=False))
        f.truncate()

if __name__ == '__main__':
    KUBEHOME = os.path.expanduser('~/.kube')
    KUBECONFIG_FILE = os.path.join(KUBEHOME, 'config')

    if len(sys.argv) != 3:
        print('Where da args at?' if len(sys.argv) < 3 else 'Too many args bruv')
        sys.exit(1)

    NEWCONFIG_FILE = os.path.expanduser(sys.argv[1])
    NAME = sys.argv[2]

    rc = merge_config(KUBECONFIG_FILE, NEWCONFIG_FILE, NAME) or 0
    sys.exit(rc)

This is what we use at the Linux Foundation when creating a large number of clusters that we expect one user to interact with. The first config that gets set up is run through sed to replace ubuntu and juju with appropriate names. Each subsequent ~/.kube/config.... file is then merged into that one with this script.
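Roughly, the flow looks like this (the script filename, the model suffix, and the exact strings replaced by sed are placeholders here, not something conjure-up guarantees):

sed -i -e 's/juju/prod/g' -e 's/ubuntu/prod-user/g' ~/.kube/config
./merge-kubeconfig.py ~/.kube/config.conjure-up-xyz cdk-xyz
./merge-kubeconfig.py ~/.kube/config.conjure-up-abc cdk-abc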

There are some implications, however, since we can't just use the model name as it's usually over 35 characters long (that is woefully long to hand-type every time, and there is no autocomplete on kubectl). Instead we should aim for a context name that is short and sweet.

I'd prefer models and contexts to be named something like cdk-XXX or k8s-XXX where XXX is a short random string to avoid collisions.

Alternatively, having the user provide a name for the cluster after the deployment but before downloading bins might be helpful. They've already done the hard part (installing the cluster); it's just a matter of providing an alias for this deployment. The model would remain conjure-up... but the context / user / cluster names would be based on the alias.

adam-stokes commented 7 years ago

> There are some implications, however, since we can't just use the model name as it's usually over 35 characters long (that is woefully long to hand-type every time, and there is no autocomplete on kubectl). Instead we should aim for a context name that is short and sweet.
>
> I'd prefer models and contexts to be named something like cdk-XXX or k8s-XXX where XXX is a short random string to avoid collisions.

+1

> Alternatively, having the user provide a name for the cluster after the deployment but before downloading bins might be helpful. They've already done the hard part (installing the cluster); it's just a matter of providing an alias for this deployment. The model would remain conjure-up... but the context / user / cluster names would be based on the alias.

We could definitely do this in a step.

Ok, so what I'll do is review this script, see how best to incorporate it into the spells, and solicit your feedback. I'd like to start on this early next week so that we can have something finalized (at the very least) by EOW.

mbruzek commented 7 years ago

Actually @marcoceppi, wouldn't it be better if we just distilled these into actual kubectl config commands? That way, if the data format changes, we would be less likely to break.

kubectl config --kubeconfig={0} set-cluster {1} --server={2} --certificate-authority={3} --embed-certs=true
kubectl config --kubeconfig={0} set-credentials {1} --client-key={2} --client-certificate={3} --embed-certs=true
kubectl config --kubeconfig={0} set-context {1} --cluster={2} --user={3}

Something like this ^
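Filled in with hypothetical values (the names, server address, and cert paths below are only placeholders), that would be something like:

kubectl config --kubeconfig=$HOME/.kube/config set-cluster cdk-abc-cluster --server=https://10.0.0.4:443 --certificate-authority=ca.crt --embed-certs=true
kubectl config --kubeconfig=$HOME/.kube/config set-credentials cdk-abc-user --client-key=client.key --client-certificate=client.crt --embed-certs=true
kubectl config --kubeconfig=$HOME/.kube/config set-context cdk-abc --cluster=cdk-abc-cluster --user=cdk-abc-user
kubectl config use-context cdk-abc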

marcoceppi commented 7 years ago

Considering you have to parse the kubeconfig file that we download in order to update the new one anyway, it seems that both methods are fraught with problems. What happens when the command line changes? It feels simpler to just programmatically merge the files. Neither method solves future-proofing.

ktsakalozos commented 7 years ago

This issue+PR could be related: https://github.com/kubernetes/kubernetes/issues/38791

And here is another tool for merging config files: https://github.com/Collaborne/load-kubeconfig

ktsakalozos commented 7 years ago

@battlemidget did you have the chance to look at the script provided?

Thanks

adam-stokes commented 7 years ago

Not yet, it's on my todo list.