The collection includes a variety of Ansible content to help automate the management of applications in Kubernetes and OpenShift clusters, as well as the provisioning and maintenance of clusters themselves.
k8s_info returns successful == true when the api-server is not reachable. #508
SUMMARY
The k8s_info module returns successful == true during periods when communication with the api-server is not possible, as long as the resource cache has already been established. The reproduce steps below simulate a playbook that contains a series of k8s module tasks: after a handful of tasks succeed, communication with the api-server becomes problematic due to temporary/intermittent availability issues. During this "problematic" phase, k8s_info-based tasks continue to return successful == true with an empty resources list.
If kubectl get ... would fail against an api-server with intermittent availability/communication problems, then so should k8s_info.
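If k8s_info did fail hard here, callers that want to tolerate an unreachable api-server could still opt in to that explicitly. A minimal sketch (an assumption about how such a task could look, not part of the original report), reusing the botched kubeconfig from the reproduce steps below:

# Sketch (assumption): if k8s_info failed on an unreachable api-server, a caller
# that wants to tolerate the outage could still opt in explicitly.
- name: Check for the secret, tolerating an unreachable api-server
  k8s_info:
    api_version: v1
    kind: Secret
    name: 'my-secret'
    namespace: 'default'
    kubeconfig: '/tmp/botched.kubeconfig'
  register: _secret_lookup
  ignore_errors: true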
ISSUE TYPE
Bug Report
COMPONENT NAME
The k8s_info module; possibly other kubernetes.core modules exhibit the same behavior.
STEPS TO REPRODUCE
I used kind to recreate, but any Kubernetes cluster will work.
1. kind create cluster --name test --kubeconfig /tmp/kind.kubeconfig --image kindest/node:v1.24.4
2. kubectl --kubeconfig /tmp/kind.kubeconfig create secret generic my-secret --from-literal=foo=bar
3. cp /tmp/kind.kubeconfig /tmp/botched.kubeconfig
4. Edit /tmp/botched.kubeconfig and remove the "certificate-authority-data:" line from the file.
5. ansible-playbook recreate-k8s-info-error.yml -vvv
# PLAYBOOK recreate-k8s-info-error.yml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Check for existing cluster secret with good kubeconfig
      k8s_info:
        api_version: v1
        kind: Secret
        name: 'my-secret'
        namespace: 'default'
        kubeconfig: '/tmp/kind.kubeconfig'
      register: _secret_data_a

    # The expectation is that this will result in a failed task.
    # However, this will return as a successful task.
    - name: Check for existing cluster secret with bad kubeconfig
      k8s_info:
        api_version: v1
        kind: Secret
        name: 'my-secret'
        namespace: 'default'
        kubeconfig: '/tmp/botched.kubeconfig'
      register: _secret_data_b
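The silent success is easy to misread downstream. A hypothetical follow-up task (illustration only, not part of the original playbook) shows how an empty resources list caused by an unreachable api-server is indistinguishable from a genuinely missing secret:

# Hypothetical downstream task (illustration only): because the failing lookup
# still reports ok with resources == [], this condition cannot distinguish
# "api-server unreachable" from "secret does not exist".
- name: Conclude (incorrectly) that the secret does not exist
  debug:
    msg: "my-secret not found; it would be (re)created here"
  when: _secret_data_b.resources | length == 0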
EXPECTED RESULTS
I expect the "Check for existing cluster secret with bad kubeconfig" task to fail.
ACTUAL RESULTS
ansible-playbook recreate-k8s-info-error.yml -v
No config file found; using defaults
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Check for existing cluster secret with good kubeconfig] ******************************************************************************************************************************************************
ok: [localhost] => {"api_found": true, "changed": false, "resources": [{"apiVersion": "v1", "data": {"foo": "YmFy"}, "kind": "Secret", "metadata": {"creationTimestamp": "2022-09-07T03:51:50Z", "managedFields": [{"apiVersion": "v1", "fieldsType": "FieldsV1", "fieldsV1": {"f:data": {".": {}, "f:foo": {}}, "f:type": {}}, "manager": "kubectl-create", "operation": "Update", "time": "2022-09-07T03:51:50Z"}], "name": "my-secret", "namespace": "default", "resourceVersion": "1009", "uid": "8b96bebd-8d4d-46b7-9645-5abd85fc25d3"}, "type": "Opaque"}]}
TASK [Check for existing cluster secret with bad kubeconfig] *******************************************************************************************************************************************************
ok: [localhost] => {"api_found": true, "changed": false, "msg": "Exception 'HTTPSConnectionPool(host='127.0.0.1', port=55002): Max retries exceeded with url: /api/v1/namespaces/default/secrets/my-secret?fieldSelector=&labelSelector= (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))' raised while trying to get resource using {'name': 'my-secret', 'namespace': 'default', 'label_selector': '', 'field_selector': ''}", "resources": []}
PLAY RECAP *********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
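As a caller-side workaround (a sketch based on the output above, not an official fix), the lookup can be forced to fail when the module swallowed an exception or returned no resources for a secret that is expected to exist:

# Workaround sketch (assumption): fail explicitly when the module reports an
# exception message or returns nothing for a secret that is expected to exist.
- name: Check for existing cluster secret with bad kubeconfig
  k8s_info:
    api_version: v1
    kind: Secret
    name: 'my-secret'
    namespace: 'default'
    kubeconfig: '/tmp/botched.kubeconfig'
  register: _secret_data_b
  failed_when: >-
    'Exception' in (_secret_data_b.msg | default(''))
    or _secret_data_b.resources | length == 0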