sylvainOL closed this issue 3 years ago.
@sylvainOL - One quick question: for the copy task, you're using the folded scalar option:

```yaml
content: >
  apiVersion: apps/v1
  Kind: Deployment
  Metadata:
    name: nginx-deployment
    labels:
      app: nginx
```

That seems like it would result in an invalid file being copied into place, with contents like:

```
apiVersion: apps/v1 Kind: Deployment Metadata: name: nginx-deployment labels: app: nginx
```

Can you try using the literal scalar instead, so newlines are preserved?

```yaml
content: |
  apiVersion: apps/v1
  Kind: Deployment
  Metadata:
    name: nginx-deployment
    labels:
      app: nginx
```
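The difference between the two block scalar styles can be checked quickly with PyYAML (a sketch, assuming PyYAML is installed in the environment; `yaml.safe_load` is its standard parsing entry point):

```python
import yaml  # PyYAML; assumed available (pip install pyyaml)

# Folded scalar (>): line breaks between equally-indented lines become spaces.
folded = yaml.safe_load("content: >\n  apiVersion: apps/v1\n  Kind: Deployment\n")

# Literal scalar (|): line breaks are preserved exactly as written.
literal = yaml.safe_load("content: |\n  apiVersion: apps/v1\n  Kind: Deployment\n")

print(repr(folded["content"]))   # 'apiVersion: apps/v1 Kind: Deployment\n'
print(repr(literal["content"]))  # 'apiVersion: apps/v1\nKind: Deployment\n'
```

Note that folding only joins lines at the block's base indentation; more-indented lines keep their newlines, which is why only the top-level keys end up glued together in the output below.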
Hi @geerlingguy, the file is OK when I look at it:

```
$ cat /tmp/deploy.yml
apiVersion: apps/v1 Kind: Deployment Metadata:
  name: nginx-deployment
  labels:
    app: nginx
Spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
But I can run the test with `|`.
(Actually, I created this bug report by simplifying a playbook I'm working on, where the files are retrieved by URL: https://gitlab.com/Orange-OpenSource/lfn/infra/kubernetes-monitoring-role/-/blob/ansible_collection/tasks/prometheus-stack-helmv2.yml)
```
$ cat /tmp/deploy.yml
apiVersion: apps/v1
Kind: Deployment
Metadata:
  name: nginx-deployment
  labels:
    app: nginx
Spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
Results (with a single `-v`, as I believe it changes nothing):

```
$ ansible-playbook -i inventory /tmp/test.yaml -v
Using /Users/sylvain/.ansible.cfg as config file

PLAY [other] *****

TASK [Gathering Facts] ***
[DEPRECATION WARNING]: Distribution debian 9.6 on host other should use /usr/bin/python3, but is using
/usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to
using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information. This
feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [other]

TASK [generate k8s yaml] *****
changed: [other] => changed=true
  checksum: 1f748fd324c582f828ce615845f5cd350d430607
  dest: /tmp/deploy.yml
  gid: 1000
  group: debian
  md5sum: 039a0a630dae61c7a8ba461bbd11a06d
  mode: '0644'
  owner: debian
  size: 340
  src: /home/debian/.ansible/tmp/ansible-tmp-1605631301.779688-52159-127585612088991/source
  state: file
  uid: 1000

TASK [add deploy] ****
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using
a module and expect the file to exist on the remote, see the remote_src option
fatal: [other]: FAILED! => changed=false
  msg: |-
    Could not find or access '/tmp/deploy.yml' on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option

PLAY RECAP ***
other : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
```
$ cat test.yaml
```

```yaml
- hosts: other
  tasks:
    - name: generate k8s yaml
      delegate_to: localhost
      copy:
        content: |
          apiVersion: apps/v1
          Kind: Deployment
          Metadata:
            name: nginx-deployment
            labels:
              app: nginx
          Spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/deploy.yml
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        src: /tmp/deploy.yml
```
```
$ ansible-playbook -i inventory /tmp/test.yaml -v
Using /Users/sylvain/.ansible.cfg as config file

PLAY [other] *****

TASK [Gathering Facts] ***
[DEPRECATION WARNING]: Distribution debian 9.6 on host other should use /usr/bin/python3, but is using
/usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to
using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information. This
feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [other]

TASK [generate k8s yaml] *****
changed: [other] => changed=true
  checksum: 1f748fd324c582f828ce615845f5cd350d430607
  dest: /tmp/deploy.yml
  gid: 20
  group: staff
  md5sum: 039a0a630dae61c7a8ba461bbd11a06d
  mode: '0644'
  owner: sylvain
  size: 340
  src: /Users/sylvain/.ansible/tmp/ansible-tmp-1605631449.454893-53998-170118609993164/source
  state: file
  uid: 501

TASK [add deploy] ****
fatal: [other]: FAILED! => changed=false
  msg: Error accessing /tmp/deploy.yml. Does the file exist?

PLAY RECAP ***
other : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
So the behavior is the same as before.
And thanks for the quick reply!
@sylvainOL Could you please try #320 and the following playbook?
```yaml
---
- hosts: centos
  tasks:
    - name: generate k8s yaml
      copy:
        content: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx-deployment
            namespace: default
            labels:
              app: nginx
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/deploy.yml
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        remote_src: True
        src: /tmp/deploy.yml
```
Hi @Akasurde, thanks for the proposal. I'm a bit lost, though: I'm not sure how I can grab / test the #320 version of community.kubernetes :S
Either copy all the files from this PR to the respective locations in community.kubernetes, or:

```shell
# mkdir -p /tmp/collections/ansible-collections/community
# git clone https://github.com/Akasurde/community.kubernetes /tmp/collections/ansible-collections/community/kubernetes
# cd /tmp/collections/ansible-collections/community/kubernetes
# git checkout -b i320 -t origin/remote_src
# export ANSIBLE_COLLECTIONS_PATH=/tmp/collections
# ansible-playbook ...
```
👍, I'll do it ASAP.
This playbook worked with #320:

```yaml
---
- hosts: other
  gather_facts: False
  tasks:
    - name: generate k8s yaml
      copy:
        content: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx-deployment
            namespace: default
            labels:
              app: nginx
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/deploy.yml
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        remote_src: True
        src: /tmp/deploy.yml
```
So #320 fixes one of the two issues (actually the one I wanted ;) ), but we still have an issue when trying to access the file on the controller.
Here are two examples, and neither works.
Both are launched this way:

```shell
ansible-playbook -vvvi inv /tmp/test2.yaml
```
```yaml
---
- hosts: other
  gather_facts: False
  tasks:
    - name: generate k8s yaml
      copy:
        content: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx-deployment
            namespace: default
            labels:
              app: nginx
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/files/deploy.yml
      delegate_to: localhost
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        src: /tmp/files/deploy.yml
```
```yaml
---
- hosts: other
  gather_facts: False
  tasks:
    - name: generate k8s yaml
      copy:
        content: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx-deployment
            namespace: default
            labels:
              app: nginx
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/files/deploy.yml
      delegate_to: localhost
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        src: deploy.yml
```
(Almost) the same result both times:

```
msg: Error accessing /tmp/files/deploy.yml. Does the file exist?
```

and yet:

```
$ ls -l /tmp/files/deploy.yml
-rw-r--r-- 1 sylvain staff 363 4 déc 13:41 /tmp/files/deploy.yml
```
@Akasurde, thanks for the PR, it makes (half of) the bug disappear!
@sylvainOL

```yaml
---
- hosts: centos
  tasks:
    - name: generate k8s yaml
      copy:
        content: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx-deployment-1
            namespace: default
            labels:
              app: nginx
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                  - containerPort: 80
        dest: /tmp/files/deploy.yml
      delegate_to: localhost
    - name: add deploy
      community.kubernetes.k8s:
        state: present
        remote_src: True
        src: /tmp/files/deploy.yml
```

```
TASK [generate k8s yaml] **********************************************************************
task path: /playbooks/k8s/remote_src/remote_src.yml:4
changed: [127.0.0.1 -> localhost] => {"changed": true, "checksum": "568c85f33846c13e8c85d8e2099b13667160f255", "dest": "/tmp/files/deploy.yml", "gid": 0, "group": "root", "md5sum": "f53962130c20869506f4f5ba0b8dcc30", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 363, "src": "/root/.ansible/tmp/ansible-tmp-1607330650.1133351-51630-30526254746699/source", "state": "file", "uid": 0}
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info

TASK [add deploy] *****************************************************************************
task path: /playbooks/k8s/remote_src/remote_src.yml:32
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info
redirecting (type: action) community.kubernetes.k8s to community.kubernetes.k8s_info
changed: [127.0.0.1] => {"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-12-07T08:44:11Z", "generation": 1, "labels": {"app": "nginx"}, "name": "nginx-deployment-1", "namespace": "default", "resourceVersion": "6777460", "selfLink": "/apis/apps/v1/namespaces/default/deployments/nginx-deployment-1", "uid": "459ea0e0-3e97-47c7-887c-5b94ba457e88"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 3, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "nginx"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "nginx"}}, "spec": {"containers": [{"image": "nginx:1.7.9", "imagePullPolicy": "IfNotPresent", "name": "nginx", "ports": [{"containerPort": 80, "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}}, "status": {}}}
```
@sylvainOL You are not specifying `remote_src: True` in the `community.kubernetes.k8s` task in https://github.com/ansible-collections/community.kubernetes/issues/307#issuecomment-738765081.
@Akasurde, well, as far as I understand it, we have two options when using `community.kubernetes.k8s`:

- `remote_src: True`: here the file has to be on the host where the playbook is applied (`other` in my example), not on the Ansible controller. With your patch (#320), it works.
- no `remote_src` (or `remote_src: False`): here the file has to be on the Ansible controller. I've tried the two options I set in https://github.com/ansible-collections/community.kubernetes/issues/307#issuecomment-738765081 and neither of them works.

Again, I believe we have two issues here. Also, I find the default counterintuitive, as most tasks are executed on the host by default, and here it's the opposite.
SUMMARY
Use of the `src` option for the `k8s` module seems to work only if the Ansible Controller and the host are the same.

ISSUE TYPE

COMPONENT NAME
k8s

role version: v1.1.1

ANSIBLE VERSION

CONFIGURATION

OS / ENVIRONMENT
Had the issue on both macOS and Debian.

STEPS TO REPRODUCE
Use two different machines: the Ansible controller and the host (`other` in the playbook).

First playbook I tried:

Second playbook tried (the same as before, but generating the file on the controller side):

EXPECTED RESULTS
I would have expected one of the tries (preferably the first, as that's the way it works with all modules) to create the deployment.

ACTUAL RESULTS
FIRST TRY
We get an error (`ansible.errors.AnsibleFileNotFound: Could not find or access '/tmp/deploy.yml' on the Ansible Controller.`) saying the file should be on the Ansible controller. And we don't see the actual command on the `other` server.

Full playbook run:

SECOND TRY
As suggested by the first run's error, we put the file on the controller. But here we see that the module tries to run itself on the host while the file is missing there (as it's on the controller...).