ajpauwels / easy-k8s

MIT License

Password based config doesn't seem to work. #1

Open moltar opened 5 years ago

moltar commented 5 years ago

Can't seem to get auth working; I keep getting a statusCode: 401 error.

Here's my config:

{
  apiVersion: 'v1',
  clusters: [
    {
      cluster: {
        'certificate-authority-data': 'xxx',
        server: 'https://127.0.0.1:57857',
        'insecure-skip-tls-verify': true,
      },
      name: 'default',
    },
  ],
  contexts: [
    {
      context: {
        cluster: 'default',
        user: 'default',
      },
      name: 'default',
    },
  ],
  'current-context': 'default',
  kind: 'Config',
  preferences: {},
  users: [
    {
      name: 'default',
      user: {
        password: 'xx',
        username: 'xx',
      },
    },
  ],
}

Client code:

Client.get(cfg, 'default', 'pod')
  .then(res => console.log(res))
  .catch(err => console.error(err))

The same config works fine directly with kubernetes-client.

Any ideas?

Thanks!

ajpauwels commented 5 years ago

hey @moltar I'll take a look at this tonight, thanks for bringing it up :) let me know if you find anything in the meantime.

ajpauwels commented 5 years ago

@moltar do you mind giving it a try using the godaddy client directly? https://github.com/godaddy/kubernetes-client

Want to see if it's a problem with passing through or if it's lower-level.

moltar commented 5 years ago

Hey @ajpauwels thanks for getting back.

I did mention in my original ticket that:

The same config works fine directly with kubernetes-client.

By that I did mean https://github.com/godaddy/kubernetes-client

ajpauwels commented 5 years ago

@moltar I pushed a PR (#2) that might address it. I can't reproduce the bug since my clusters don't use username/password, but I suspect the issue is with the API mapping. It's possible your cluster requires basic auth for API discovery. I've added a conditional which adds the auth if the current-context user has a username/password.

If you can, try modifying the file directly at node_modules/easy-k8s/apimap.js and let me know.

And sorry for the confusion above.
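
For reference, a minimal sketch of the kind of conditional described (the function name, option shape, and variable names here are hypothetical illustrations, not easy-k8s's actual apimap.js code):

```javascript
// Hypothetical sketch: attach HTTP basic auth to the API-discovery request
// only when the kubeconfig's current-context user carries username/password.
// The actual option and variable names in apimap.js may differ.
function addBasicAuth(requestOptions, user) {
  if (user && user.username && user.password) {
    requestOptions.auth = { user: user.username, pass: user.password };
  }
  return requestOptions;
}

// Example: a user entry shaped like the one in the kubeconfig above
const opts = addBasicAuth(
  { url: 'https://127.0.0.1:57857/apis' },
  { username: 'xx', password: 'xx' }
);
console.log(opts.auth.user); // prints "xx"
```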

ajpauwels commented 5 years ago

Hey @moltar is there any chance you were able to try this out? I can't merge and promote it without confirmation.

moltar commented 5 years ago

Still getting Unauthorized.

I am actually testing this with a dind (Docker-in-Docker) setup.

This is from the K3s project: https://k3s.io/

But I am running the Docker command programmatically, without the k3s client, like so:

docker run --name k3s_default -e K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml --publish 6443:6443 --privileged -d rancher/k3s:v0.1.0 server --https-listen-port 6443

But using the client is probably easier for one-off tests.

moltar commented 5 years ago

Actually, wait, I didn't realize that this was unpublished.

I installed the PR version, and now I am getting:

TypeError: reqChain.post is not a function

    at reqChain.patch.catch (node_modules/easy-k8s/client.js:230:20)

moltar commented 5 years ago
  console.log src/kubectl.ts:47
    Namespace {
      apiVersion: 'v1',
      kind: 'Namespace',
      metadata: { name: 'flux', labels: { name: 'flux' } } }

  console.log node_modules/easy-k8s/client.js:230
    templatedEndpoints, parameters, template, splits, backend, getNames, children, pods, po, pod, services, svc, service, persistentvolumeclaims, pvc, persistentvolumeclaim, replicationcontrollers, rc, replicationcontroller, resourcequotas, quota, resourcequota, configmaps, cm, configmap, endpoints, ep, endpoint, events, ev, event, limitranges, limits, limitrange, podtemplates, podtemplate, secrets, secret, serviceaccounts, serviceaccount, bindings, binding, finalize, status, pathItemObject, get, getStream, put, delete, patch

I did some dumps. The first is the object I am trying to updateOrCreate.

The second is just reqChain object keys (console.log(Object.keys(reqChain).join(', '))) at the point of failure.

Looks like there is no post method at all.
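
For illustration, here is a sketch of the update-or-create pattern implied by the stack trace above (this is not easy-k8s's actual client.js code; the function shape is an assumption): try PATCH first, then fall back to POST when the resource doesn't exist yet. If the request chain exposes no `post` method, the fallback itself throws the TypeError seen above.

```javascript
// Illustrative sketch only, not the library's real implementation.
async function updateOrCreate(reqChain, spec) {
  try {
    // Attempt an update first.
    return await reqChain.patch({ body: spec });
  } catch (err) {
    // The fallback assumes `post` exists on the chain; guard for clarity.
    if (typeof reqChain.post !== 'function') {
      throw new TypeError('reqChain.post is not a function');
    }
    // Resource didn't exist (or patch failed): create it instead.
    return reqChain.post({ body: spec });
  }
}

// A chain with `patch` but no `post` reproduces the failure mode:
updateOrCreate({ patch: () => Promise.reject(new Error('404')) }, {})
  .catch(err => console.error(err.message)); // prints "reqChain.post is not a function"
```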

moltar commented 5 years ago

I think I see the problem now:

    const namespace = resourceSpec.metadata.namespace;

The method assumes that every object is namespaced. This, of course, isn't true for the Namespace resource itself, which is cluster-scoped. So the solution, I think, is to issue a different kind of request for kind: Namespace when creating or updating it.
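
To sketch what that branching could look like (illustrative only: the path-building logic and the set of cluster-scoped kinds below are assumptions, not easy-k8s's actual code):

```javascript
// Illustrative: the real client builds paths differently, and this set of
// cluster-scoped kinds is incomplete.
const CLUSTER_SCOPED_KINDS = new Set(['Namespace', 'Node', 'PersistentVolume']);

function resourcePath(resourceSpec) {
  // Naive pluralization; works for the kinds used here.
  const plural = resourceSpec.kind.toLowerCase() + 's';
  if (CLUSTER_SCOPED_KINDS.has(resourceSpec.kind)) {
    // Cluster-scoped resources have no /namespaces/<ns>/ path segment.
    return `/api/v1/${plural}`;
  }
  const namespace =
    (resourceSpec.metadata && resourceSpec.metadata.namespace) || 'default';
  return `/api/v1/namespaces/${namespace}/${plural}`;
}

console.log(resourcePath({ kind: 'Namespace', metadata: { name: 'flux' } }));
// prints "/api/v1/namespaces"
console.log(resourcePath({ kind: 'Pod', metadata: { namespace: 'flux' } }));
// prints "/api/v1/namespaces/flux/pods"
```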

moltar commented 5 years ago

To follow up on the original issue of username/password auth: I confirm that this PR fixes that specific issue.

The following code does work and produces a list of pods:

const K8s = require('easy-k8s').Client

const { readFileSync } = require('fs')

const kubeconfig = JSON.parse(readFileSync(process.env.KUBECONFIG, { encoding: 'utf8' }))
kubeconfig.clusters[0].cluster['insecure-skip-tls-verify'] = true

K8s.get(kubeconfig, 'all', 'pod').then(allPodSpecs => {
  console.log(allPodSpecs)
})

ajpauwels commented 5 years ago

@moltar awesome, good to hear. I'll take a look at the namespacing bug and try to get it all fixed and promoted today.

moltar commented 5 years ago

The namespace bug is, I think, outside the scope of this PR; it's a separate issue.
