Closed: reneforstner closed this issue 4 years ago.
@reneforstner Just to clarify, because I don't see it in the snippet you supplied: where are you delegating the "k8s namespace present" task to run on your bastion host? Also, in the copy task you have dest set to /home/{{ ace_user }}/.kube/config, but in the following task you are getting the kubeconfig from /home/ubuntu/.kube/config. Did you mean to do that? Is ace_user always equal to "ubuntu"?
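A minimal sketch of a consistent pair of tasks, assuming the play already targets the bastion host (the source path, namespace name and task names below are illustrative, not taken from this issue):

- name: Copy kubeconfig to the bastion host
  copy:
    src: files/kubeconfig                             # hypothetical source on the controller
    dest: /home/{{ ace_user }}/.kube/config
    owner: "{{ ace_user }}"
    mode: "0600"

- name: Create namespace from the bastion host
  community.kubernetes.k8s:
    state: present
    kind: Namespace
    name: example                                     # hypothetical namespace
    kubeconfig: /home/{{ ace_user }}/.kube/config     # same path as the copy task above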
@tima Sorry for this... {{ ace_user }} is "ubuntu"; I just hardcoded it for testing purposes. I launch my playbook with an ini file which contains all the vars, as well as the host:
Calling the playbook:
ansible-playbook -i bastion.ini playbook.yml -vvvv
Start of the playbook:
- name: Configure host
  hosts: acebastion
  become: true
bastion.ini:
[acebastion]
ip-of-my-host ansible_user=ubuntu ansible_ssh_private_key_file=../../../key
[acebastion:vars]
ace_user=ubuntu
some other vars = some other values
ansible_python_interpreter=/usr/bin/python3
Hi again,
I just figured out that this issue is related to the newest Python kubernetes module (which was released on 15 October). Because I had only required a kubernetes module newer than 10.0.0, the latest and greatest was installed. I changed it to 11.0.0 and everything works as expected.
I'll try to figure out the issue using the Python module natively, without Ansible, and let you know.
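A minimal sketch of how the pin could be expressed as an Ansible task, assuming pip is available on the bastion host (the task name is illustrative, the version is the one mentioned above):

- name: Pin the kubernetes Python client to a known-good version
  pip:
    name: kubernetes==11.0.0
    state: present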
@reneforstner Do you think this will resolve your issue - https://github.com/ansible-collections/community.kubernetes/pull/276?
@Akasurde Indeed, when I do not specify any kubeconfig (which I usually do not do) I get the identical error message with kubernetes module 12.0.0.
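Until that fix lands, one possible workaround is to pass the kubeconfig path to the module explicitly; a sketch, assuming the file lives under the remote user's home (the namespace name is illustrative):

- name: Ensure namespace exists
  community.kubernetes.k8s:
    state: present
    kind: Namespace
    name: example
    kubeconfig: /home/{{ ace_user }}/.kube/config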
@reneforstner Thanks for the information.
resolved_by_pr #276
SUMMARY
Remote execution of k8s functions is not working
ISSUE TYPE
COMPONENT NAME
community.kubernetes.k8s
ANSIBLE VERSION
CONFIGURATION
Ansible Controller Host --> Bastion Host --> AWS EKS Kubernetes Cluster
What I want to achieve is quite simple: I create an AWS EKS cluster and a bastion host with Terraform, and Terraform triggers a playbook which configures my bastion host as well as the EKS cluster. BUT all k8s commands should be executed from the bastion host and not from the host running the playbook (due to access limitations).
I promise I did not change anything, and it worked until last week.
OS / ENVIRONMENT
Ansible Controller-Host:
AWS EKS-Cluster:
STEPS TO REPRODUCE
In the past I used the deprecated k8s module for Ansible to configure an AWS EKS cluster. Since last week, for some reason, both k8s modules (the deprecated one and the community version) try to run their work from my Ansible controller host instead of from the host where it should be executed.
affected snippet of the playbook:
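(Not the original snippet, but a hypothetical reconstruction of the kind of task described, showing both the deprecated module and the collection module; the namespace name and kubeconfig path are assumptions:)

# deprecated module
- name: Ensure namespace is present
  k8s:
    state: present
    kind: Namespace
    name: example
    kubeconfig: /home/ubuntu/.kube/config

# community collection module
- name: Ensure namespace is present
  community.kubernetes.k8s:
    state: present
    kind: Namespace
    name: example
    kubeconfig: /home/ubuntu/.kube/config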
EXPECTED RESULTS
The namespace gets created, or at least some more detailed info on the error.
ACTUAL RESULTS
Besides the fact that the previous versions were OK with having the kubeconfig on my remote host (it now needs to reside on my Ansible controller host), the playbook fails with:
"Failed to load kubeconfig due to Invalid kube-config file. No configuration found." - even with -vvvv
The kubeconfig file I used looks like this; if I use this kubeconfig with kubectl, everything works as expected:
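(The actual file is not reproduced here; for orientation, a generic EKS-style kubeconfig skeleton of the shape in question, with all names, endpoints and the cluster name as placeholders:)

apiVersion: v1
kind: Config
clusters:
- name: example-eks-cluster
  cluster:
    server: https://EXAMPLE.eks.amazonaws.com
    certificate-authority-data: <base64-encoded CA>
contexts:
- name: example-context
  context:
    cluster: example-eks-cluster
    user: example-user
current-context: example-context
users:
- name: example-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "example-eks-cluster"]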