Closed nhumrich closed 7 years ago
Hi Nick, it seems you're trying to use gcloud credentials to auth to the Kubernetes cluster. The two are very different; instead of using a GCE service account as the credential source in load_config, you need to use your kube config file. If you can run kubectl commands locally, then you have one under ~/.kube/config
---- On Tue, 23 May 2017 00:43:06 +0300 notifications@github.com wrote ----
I am using google container engine, and trying to use this to access the k8s api. Trying to follow the example on the readme
```python
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()
pods = api.list_pod_for_all_namespaces(watch=False)
for p in pods.items:
    print(p.metadata.name, p.status.phase)
```

which gives me the following error:
```
Traceback (most recent call last):
  File "/home/nhumrich/devops/containers/deployment/scripts/kube-deploy.py", line 6, in
Traceback (most recent call last):
  File "/home/nhumrich/devops/containers/deployment/scripts/kube-deploy.py", line 19, in
```
Are there any examples of how I authenticate with kubernetes/google container engine so that I can get this working?
Note: one possible solution is to run `gcloud auth application-default login`, but that isn't automated and only works locally.
I tried doing that, which is the first option. I also tried passing in the config file location, but that doesn't change anything, as that is the default anyway.
Does kubectl work? If yes, can you provide the content of your ~/.kube/config file? (Remove any keys/tokens/etc. from the file; only the current context should be enough, e.g. if your current-context is "x", provide user "x" and cluster "x".)
Here is my kube config file:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...Something goes here...
    server: https://some.url
  name: gke_infastructure-111111_us-central1-a_my-cluster
contexts:
- context:
    cluster: gke_infastructure-111111_us-central1-a_my-cluster
    user: gke_infastructure-111111_us-central1-a_my-cluster
  name: gke_infastructure-111111_us-central1-a_my-cluster
current-context: gke_infastructure-111111_us-central1-a_my-cluster
kind: Config
preferences: {}
users:
- name: gke_infastructure-111111_us-central1-a_my-cluster
  user:
    auth-provider:
      config:
        access-token: zzzzzz.some-token-here.zzzzz
        cmd-args: config config-helper --format=json
        cmd-path: /opt/google-cloud-sdk/bin/gcloud
        expiry: 2017-05-24T19:57:52Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
+1 on this one. We are having the same problem. kubectl is working fine, but we get the 401 when using the library, and our config file looks like the one above. Can anyone send an example of how to do it properly, or is it impossible to make it work with GKE clusters?
The kubeconfig loader should run the refresh command to update the token when it has expired. To solve this, we can find the corresponding code in the Go client (which kubectl uses) and port it to Python. Added it to the milestone to make sure it will be looked at for the next release.
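As a rough sketch of what such a port needs to check, the loader can compare the kubeconfig's `expiry` field against the current time and only shell out to the refresh command when the token is stale (a minimal illustration, not the client's actual implementation):

```python
import datetime

def token_expired(expiry_str):
    # 'expiry' in the kubeconfig is an RFC 3339 UTC timestamp,
    # e.g. '2017-05-24T19:57:52Z'. The token is stale once that
    # instant is at or before "now".
    expiry = datetime.datetime.strptime(expiry_str, '%Y-%m-%dT%H:%M:%SZ')
    return expiry <= datetime.datetime.utcnow()
```

Only when this returns True would the loader need to re-run the configured `cmd-path`/`cmd-args` command and store the fresh token back.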
Just ran into this issue myself. You can use this dirty hack to get auth on GKE in the meantime. Please note that the retrieved token will expire within a rather short timeframe. This should not be used in production.

In kube_config.py, add these imports:

```python
import subprocess
import json
```

modify KubeConfigLoader's __init__ method like this:

```python
if get_google_credentials:
    self._get_google_credentials = get_google_credentials
else:
    self._get_google_credentials = lambda: (
        self._get_gcp_cmd_credentials()
    )
```

and add the following method to the class:

```python
def _get_gcp_cmd_credentials(self):
    # Run the refresh command configured under the user's auth-provider
    # config (for GKE: '/opt/google-cloud-sdk/bin/gcloud config
    # config-helper --format=json') and pull the access token out of
    # its JSON output.
    cmd_path = self._user.value['auth-provider']['config']['cmd-path']
    cmd_args = self._user.value['auth-provider']['config']['cmd-args'].split(' ')
    output = subprocess.run([cmd_path] + cmd_args, stdout=subprocess.PIPE,
                            check=False).stdout.decode('utf-8')
    return json.loads(output)['credential']['access_token']
```
Done!
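For reference, the `token-key` and `expiry-key` JSONPaths in the kubeconfig above imply the shape of the config-helper output that this hack parses. A standalone illustration of that parsing, with placeholder values standing in for real gcloud output:

```python
import json

# Placeholder payload mirroring the structure that
# 'gcloud config config-helper --format=json' reports; only the
# 'credential' subtree matters here.
sample = """
{
  "credential": {
    "access_token": "ya29.placeholder-token",
    "token_expiry": "2017-05-24T19:57:52Z"
  }
}
"""
credential = json.loads(sample)['credential']
token = credential['access_token']
```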
Update: I was able to work around this issue by creating a serviceaccount in Kubernetes. I then ran `kubectl describe serviceaccount myserviceaccount`, which gives you a secret name; then use that secret name to run `kubectl describe secrets [secret-name]` and copy the token field. Once you have the token, all you need to do is set the API token in the client:

```python
config.load_kube_config()
client.configuration.api_key['authorization'] = 'your token goes here'
client.configuration.api_key_prefix['authorization'] = 'Bearer'
```
This worked great for me. If you don't want to use the kube config file at all, you can also set the host and cert yourself:

```python
client.configuration.api_key['authorization'] = 'your token goes here'
client.configuration.api_key_prefix['authorization'] = 'Bearer'
client.configuration.host = 'https://some.domain-or-ip.example'
client.configuration.ssl_ca_cert = 'cert/location.crt'
```
Hah, I'm using the API to set up the service accounts, so it is a kind of chicken-and-egg problem. Of course, using a service account is the preferred and more durable solution.
@zweizeichen you could always use kubectl to create a single serviceaccount, then go from there.
I am still experiencing that issue (version 5.0.0 on mac)
I am still experiencing that issue (version 6.0.0 on mac and I also tried HEAD version) in Kubernetes 1.10.1
Same issue here - version 6.0.0 on Windows/WSL (Ubuntu). kubernetes 1.10.5
The current implementation refreshes a token when the configuration is loaded. It doesn't refresh it for long-living applications. PTAL https://github.com/kubernetes-client/python-base/issues/59
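For long-living applications, one stopgap is to retry a failed call once after re-running whatever refreshes credentials (e.g. passing config.load_kube_config as the refresh callback). This is a hypothetical helper, not part of the library:

```python
def call_with_refresh(fn, refresh, is_unauthorized):
    # Call fn(); on an 'unauthorized' error, refresh credentials and
    # retry exactly once. fn, refresh, and is_unauthorized are all
    # supplied by the caller -- e.g. is_unauthorized could check for a
    # kubernetes ApiException with status 401.
    try:
        return fn()
    except Exception as exc:
        if not is_unauthorized(exc):
            raise
        refresh()
        return fn()
```

Reloading the kubeconfig re-runs the token refresh on load, so a single retry is usually enough for an expired-token 401.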
+1 I'm confused why the library isn't able to use the Kubeconfig properly.
I'm facing this issue with version 7.0.1 (on mac), kubernetes 1.10.11. Workarounds don't help.
We are also currently experiencing the same issue +1
您的来信收到,谢谢!
Why is this issue closed? The bug still exists:
```
  File "python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 15205, in list_namespaced_event
    return self.list_namespaced_event_with_http_info(namespace, **kwargs)  # noqa: E501
  File "python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 15320, in list_namespaced_event_with_http_info
    return self.api_client.call_api(
  File "python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "python3.9/site-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "python3.9/site-packages/kubernetes/client/rest.py", line 241, in GET
    return self.request("GET", url,
  File "python3.9/site-packages/kubernetes/client/rest.py", line 235, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': '...', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Thu, 23 Feb 2023 10:15:09 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
```
Running kubectl after that refreshes the token and the API works again.
@OraserZhang wrote twice:
您的来信收到,谢谢!
This seems to be an automated reply, it machine-translates to "Your letter was received, thank you!". Could you configure your e-mail to not auto-respond to GitHub notifications? Thanks!
这似乎是一个自动回复。你能不能把你的电子邮件配置为不自动回复GitHub的通知?谢谢!
Can this ticket be reopened, because the bug still exists?
I am using google container engine, and trying to use this to access the k8s api. Trying to follow the example on the readme gives me the error above.

If I add the GOOGLE_APPLICATION_CREDENTIALS env-var and download a google json credential file, I then get a generic 401. If I try to add an api key (`client.configuration.api_key['authorization'] = 'AbX.....SYh'`), I get another error. Are there any examples of how I authenticate with kubernetes/google container engine so that I can get this working?

Note: one possible solution is to run `gcloud auth application-default login`, but that isn't automated and only works locally.