Closed brainplot closed 4 weeks ago
It's a problem with the urllib3 version. Try using a 1.x urllib3 version.
I think I'm already using that.
$ pip list | grep urllib
urllib3 1.26.5
If I try to pip install the requirements.txt file that's provided in the repo, nothing gets installed/updated. According to pip, my dependencies meet the version requirements.
@eloymg I have the same problem. I am using following kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded key>
    server: https://test.....cloud:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    token: <base64-encoded token>
And I'm running the following lines:
from kubernetes import client, config
config.load_kube_config('./kube_config')
v1 = client.CoreV1Api()
v1.list_pod_for_all_namespaces(watch=False)
[ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1000)'))': /api/v1/pods?watch=False
[ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1000)'))': /api/v1/pods?watch=False
[ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1000)'))': /api/v1/pods?watch=False
urllib3 version: 1.26.18
@eloymg I have the same problem too.
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxxxx', port=xxxx): Max retries exceeded with url: /apis/batch/v1/namespaces/default/jobs (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1006)')))
After a bit of digging, I found out what the cause of my issue is. The problem occurs when I manually generate my Kubernetes CA certificates as intermediate certificates using a custom CA.
I followed this guide to do so.
I would like to point out that the Root CA certificate that was used to generate the intermediate CA certificates (as shown in the link above) is trusted by the machine and was placed under /usr/local/share/ca-certificates. Like I said, kubectl and the rest of Kubernetes in general work just fine! It's just this client that doesn't.
It's as if it expects the Kubernetes CA certificates to be root certificates, without following the trust chain.
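If that diagnosis is right, one possible workaround (untested against this particular cluster; the file names and the `build_ca_bundle` helper are illustrative, not part of the client) would be to hand the client a CA bundle containing the full chain, intermediate CA followed by root CA, rather than the intermediate certificate alone:

```python
from pathlib import Path

def build_ca_bundle(cert_paths, bundle_path):
    """Concatenate PEM files (intermediate first, then root) into one bundle."""
    pem = "\n".join(Path(p).read_text().strip() for p in cert_paths)
    Path(bundle_path).write_text(pem + "\n")
    return bundle_path

# The resulting bundle could then be given to the client's Configuration
# via its ssl_ca_cert option, e.g. (sketch, assuming the usual setup):
#   configuration = kubernetes.client.Configuration()
#   configuration.ssl_ca_cert = build_ca_bundle(
#       ["intermediate-ca.crt", "root-ca.crt"], "ca-bundle.crt")
```

This mirrors what OpenSSL-based tools expect when the server's CA is not a root: verification succeeds only if the trust store contains the whole chain up to a trusted root.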
@brainplot Nice finding! I wonder if you would like to propose a fix?
Hi,
After reading rest.py code:
# cert_reqs
if configuration.verify_ssl:
    cert_reqs = ssl.CERT_REQUIRED
else:
    cert_reqs = ssl.CERT_NONE
In your code, try:
from kubernetes import client, config
config.load_kube_config('./kube_config')
config.verify_ssl = False  # <<< perhaps this can be set in the config
v1 = client.CoreV1Api()
v1.list_pod_for_all_namespaces(watch=False)
It works for me (no more SSL issue), my code:
import kubernetes
import requests

configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'
requests.packages.urllib3.disable_warnings()
# Defining host is optional and default to http://localhost
configuration.host = "https://10.96.0.1"
configuration.verify_ssl=False
# Enter a context with an instance of the API kubernetes.client
api_client=kubernetes.client.ApiClient(configuration)
# Create an instance of the API class
api_instance = kubernetes.client.WellKnownApi(api_client)
I understand how that can work but there's no reason why I should disable SSL/TLS verification since my setup has a perfectly valid certificate trust chain.
Same problem here connecting to EKS v1.26 cluster using in-cluster configuration. Tried:
config.load_incluster_config()
config.verify_ssl=False
Still doesn't work:
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2427)'))': /api/v1/namespaces/sfuga/configmaps
I'm using Alpine linux v3.19:
# apk info py3-urllib3
py3-urllib3-1.26.18-r0 description:
HTTP library with thread-safe connection pooling, file post, and more
py3-urllib3-1.26.18-r0 webpage:
https://github.com/urllib3/urllib3
py3-urllib3-1.26.18-r0 installed size:
580 KiB
Spent a few hours debugging this issue. It appears that the API and client are functioning as expected, but the error message is confusing for users. The issue is caused by the size of the ConfigMap.
Kubernetes ConfigMaps have a size limit of 1MB. This limit comes from etcd, which has a limit of 1.5MB. When the object exceeds 1MB, urllib3 returns an error that is not very clear. In my case the file was ~12MB, so it obviously doesn't fit in a ConfigMap.
Here is a sample code to test that configMap creation works:
# Import necessary libraries
from kubernetes import client, config

# Load in-cluster configuration
config.load_incluster_config()

# Create a Kubernetes API client
v1 = client.CoreV1Api()

# Define the configmap data
data = {"data": "123"}

# Create the configmap object
configmap = client.V1ConfigMap(
    api_version="v1",
    kind="ConfigMap",
    metadata=client.V1ObjectMeta(name="sample"),
    data=data,
)

# Create the configmap in the cluster
v1.create_namespaced_config_map(namespace="sfuga", body=configmap)

# Print success message
print("Configmap created successfully.")
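The size limit described above can also be checked before calling the API at all; a minimal sketch (the 1 MiB figure is the documented ConfigMap cap, and the `fits_in_configmap` helper name is my own, not part of the client):

```python
import json

# ConfigMaps are capped at 1 MiB (backed by etcd's own object size limit)
MAX_CONFIGMAP_BYTES = 1_048_576

def fits_in_configmap(data: dict) -> bool:
    """Rough pre-flight check: the serialized payload must stay under ~1 MiB."""
    return len(json.dumps(data).encode("utf-8")) < MAX_CONFIGMAP_BYTES
```

Running such a check before `create_namespaced_config_map` would turn the opaque urllib3 retry noise into an actionable error message.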
@atmosx I'm honestly unsure that is relevant here. I had this issue just trying to list pods in my cluster. It's clearly something to do with the certificate the API server serves.
@brainplot
I had the same issue. I solved it by adding the certificate-authority key to my kubeconfig as mentioned in this post: https://stackoverflow.com/questions/48351308/how-to-specify-ca-bundle-in-kubernetes-python-client
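For reference, that fix amounts to pointing the cluster entry of the kubeconfig at a CA bundle on disk (the path here is illustrative; it should contain the full chain, intermediate plus root):

```yaml
clusters:
- cluster:
    # PEM bundle containing the full chain (intermediate + root)
    certificate-authority: /etc/kubernetes/pki/ca-bundle.crt
    server: https://test.....cloud:6443
  name: kubernetes
```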
@louisgls I no longer need this library thus I don't have a reason to try this. However, thank you for providing a solution.
I'm seeing the same issue when I create a cluster using a single CA certificate as intermediate, as described in brainplot's comment. As this is a valid configuration described in Kubernetes' own docs and causes the minimal example described in this project's README.md to fail, I would consider this to be a bug.
@brainplot Thanks for your excellent troubleshooting. Would you mind retitling this issue as "client doesn't follow trust chain when using single CA certificate as intermediate" or something of the sort?
Thank you @inflatador. I've updated the title and I believe the new one better describes the issue. If not, we can discuss how to clarify further.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Hello, I might be facing the same issue:
My on-premise RKE2 cluster is using an intermediate CA generated from a self-signed CA.
A pod within the cluster runs a Python script using this Kubernetes client, loading the configuration with config.load_incluster_config().
I get the same error about failing to verify the certificate: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate.
It works if I skip the certificate verification with config.verify_ssl=False, but I don't want to keep it that way for obvious security reasons.
It also works when I request the API server manually from within the pod with a curl command using the same certificate corresponding to the serviceaccount, so this Kubernetes client really is the only one unable to verify the certificate.
Is a fix planned for an upcoming version of the client? Can we reopen this issue?
/reopen /remove-lifecycle rotten
@maximemf: You can't reopen an issue/PR unless you authored it or you are a collaborator.
What happened (please include outputs or screenshots):
I was trying the client to obtain info about the running pods in a freshly-installed Kubernetes cluster, using exactly the example provided in the README.md, but I was hit with this SSL error.

What you expected to happen:
I was expecting the example to work 😄

How to reproduce it (as minimally and precisely as possible):
To be honest, I'm not sure. This is a freshly installed Ubuntu machine with a freshly-installed Kubernetes cluster.

Anything else we need to know?:
The cluster is generating its certificates using a custom CA that all nodes trust (thanks to the update-ca-certificates script), including the one I'm running this on. It should be noted that kubectl works perfectly fine with no issues whatsoever!

Environment:
- Kubernetes version (kubectl version):
- OS (e.g., MacOS 10.13.6):
- Python version (python --version):
- Python client version (pip list | grep kubernetes):