moltar opened this issue 5 years ago
Hey @moltar, I'll take a look at this tonight, thanks for bringing it up :) Let me know if you find anything in the meantime.
@moltar do you mind giving it a try using the godaddy client directly? https://github.com/godaddy/kubernetes-client
Want to see if it's a problem with passing through or if it's lower-level.
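For reference, a minimal sketch of a direct call through the godaddy client (the exact entry points vary a bit by kubernetes-client version, so treat this as approximate):

const Client = require('kubernetes-client').Client
const config = require('kubernetes-client').config

// Build a client from the local kubeconfig and list pods in the default namespace.
const client = new Client({ config: config.fromKubeconfig(), version: '1.13' })
client.api.v1.namespaces('default').pods.get()
  .then(pods => console.log(pods))
  .catch(err => console.error(err))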
Hey @ajpauwels, thanks for getting back.
I did mention in my original ticket that:
"The same config works fine directly with kubernetes-client."
By that I meant https://github.com/godaddy/kubernetes-client
@moltar I pushed a PR (#2) that might address it. I can't reproduce the bug as my clusters don't use username/password, but I suspect the issue is with the API mapping. It's possible your cluster requires basic auth for API discovery. I've added a conditional which adds the auth if the current-context user has a username/password.
If you can try modifying the file directly in your node_modules/easy-k8s/apimap.js, let me know.
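Roughly, the idea is something like this (names here are illustrative, not the exact apimap.js code):

// Illustrative only: add basic-auth credentials to the API discovery request
// when the current-context user in the kubeconfig defines username/password.
const user = getCurrentContextUser(kubeconfig) // hypothetical helper
const requestOptions = { url: `${cluster.server}/apis`, json: true }
if (user && user.username && user.password) {
  requestOptions.auth = { user: user.username, pass: user.password }
}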
And sorry for the confusion above.
Hey @moltar, is there any chance you were able to try this out? I can't merge and promote it without confirmation.
Still getting Unauthorized.
I am actually testing this with the dind (docker in docker) setup:
This is from the K3s project: https://k3s.io/
But I am running the Docker command programmatically, without the k3s client, like so:
docker run --name k3s_default -e K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml --publish 6443:6443 --privileged -d rancher/k3s:v0.1.0 server --https-listen-port 6443
But using the client is probably easier for one-off tests.
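In Node terms that's just a shell-out; a minimal sketch with child_process (assuming Docker is available on the PATH):

const { execSync } = require('child_process')

// Start a privileged k3s server container and expose the API server on 6443.
execSync(
  'docker run --name k3s_default -e K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml ' +
  '--publish 6443:6443 --privileged -d rancher/k3s:v0.1.0 server --https-listen-port 6443',
  { stdio: 'inherit' }
)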
Actually, wait, I didn't realize that this was unpublished.
I installed the PR version, and now I am getting:
TypeError: reqChain.post is not a function
at reqChain.patch.catch (node_modules/easy-k8s/client.js:230:20)
console.log src/kubectl.ts:47
Namespace {
apiVersion: 'v1',
kind: 'Namespace',
metadata: { name: 'flux', labels: { name: 'flux' } } }
console.log node_modules/easy-k8s/client.js:230
templatedEndpoints, parameters, template, splits, backend, getNames, children, pods, po, pod, services, svc, service, persistentvolumeclaims, pvc, persistentvolumeclaim, replicationcontrollers, rc, replicationcontroller, resourcequotas, quota, resourcequota, configmaps, cm, configmap, endpoints, ep, endpoint, events, ev, event, limitranges, limits, limitrange, podtemplates, podtemplate, secrets, secret, serviceaccounts, serviceaccount, bindings, binding, finalize, status, pathItemObject, get, getStream, put, delete, patch
I did some "dumps". The first is the object I am trying to updateOrCreate. The second is just the reqChain object keys (console.log(Object.keys(reqChain).join(', '))) at the point of failure.
Looks like there is no post method at all.
I think I see the problem now:
const namespace = resourceSpec.metadata.namespace;
The method assumes that every object is namespaced. That, of course, isn't true for the Namespace resource itself. So the solution, I think, is to issue a different request for kind: Namespace to create or update it.
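Something along these lines could handle it (a hypothetical sketch of the idea, not the actual easy-k8s code; buildRequestChain is an illustrative name and the fluent chain follows the godaddy client's shape):

// Illustrative only: cluster-scoped kinds such as Namespace have no
// metadata.namespace, so they need a request chain without .namespaces(...).
function buildRequestChain(client, resourceSpec) {
  if (resourceSpec.kind === 'Namespace') {
    return client.api.v1.namespaces // cluster-scoped path
  }
  const namespace = resourceSpec.metadata.namespace
  // Naive pluralization, just to show the branch; real code would map kinds properly.
  return client.api.v1.namespaces(namespace)[resourceSpec.kind.toLowerCase() + 's']
}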
Just to follow up on the original issue of user/pass auth: I can confirm that this PR fixes that specific issue.
The following code does work and produces a list of pods:
const K8s = require('easy-k8s').Client
const { readFileSync } = require('fs')

// Load the kubeconfig pointed to by $KUBECONFIG and skip TLS verification
// for the cluster's self-signed certificate.
const kubeconfig = JSON.parse(readFileSync(process.env.KUBECONFIG, { encoding: 'utf8' }))
kubeconfig.clusters[0].cluster['insecure-skip-tls-verify'] = true

// List pods ('all' namespaces, resource type 'pod').
K8s.get(kubeconfig, 'all', 'pod').then(allPodSpecs => {
  console.log(allPodSpecs)
})
@moltar awesome, good to hear, I'll take a look at the namespacing bug and try and get it all fixed and promoted today.
Can't seem to get the auth to work. Keep getting a statusCode: 401 error.
Here's my config:
Client code:
The same config works fine directly with kubernetes-client.
Any ideas?
Thanks!