Closed olwins closed 3 months ago
Edit: It works if I redefine all the project variables at the job level.
Maybe something is lost during the retry?
Job hung:
"configuration" : { "ansible-base-dir-path" : "/opt/ansible", "ansible-become" : "true", "ansible-become-method" : "sudo", "ansible-become-password-storage-path" : "keys/project/TEST_PATCHING_LINUX/password-itansible-root", "ansible-playbook" : "run_patching.yml", "ansible-ssh-passphrase-option" : "option.password", "ansible-ssh-use-agent" : "false" },
Job succeeded (basically I manually set the same values as the ones defined at the project level):
"configuration" : { "ansible-base-dir-path" : "/opt/ansible", "ansible-become" : "true", "ansible-become-method" : "sudo", "ansible-become-password-storage-path" : "keys/project/TEST_PATCHING_LINUX/password-itansible-root", "ansible-playbook" : "run_patching.yml", "ansible-ssh-auth-type" : "privateKey", "ansible-ssh-keypath" : "/var/lib/rundeck/.ssh/id_ed25519", "ansible-ssh-passphrase-option" : "option.password", "ansible-ssh-passphrase-storage-path" : "keys/project/TEST_PATCHING_LINUX/Pass_itmasteransible", "ansible-ssh-use-agent" : "true", "ansible-ssh-user" : "itansible" },
Root cause found

I thought that by default the ansible-ssh-use-agent value would be set to the one defined at the project level (true in my case). But when I create a new job, it is automatically set to false:
"ansible-ssh-use-agent" : "false"
Setting these values for the job is enough 👍
"ansible-base-dir-path" : "/opt/ansible", "ansible-become" : "true", "ansible-become-password-storage-path" : "keys/project/TEST_PATCHING_LINUX/password-itansible-root", "ansible-playbook" : "test_patching.yaml", "ansible-ssh-passphrase-option" : "option.password", "ansible-ssh-use-agent" : "true"
Seems to be fixed in the latest version.
Hi,

I have a playbook that patches a remote server; it works without issue when started manually using ansible-playbook.
But when running it with Rundeck on the same server, the playbook hangs in the reboot task every time.
Ansible playbook (this task is enough to reproduce the problem):
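The original task snippet is not included in this report. A minimal reproduction, assuming the stock ansible.builtin.reboot module (the exact task the reporter used is not shown), would look like:

```yaml
# Hypothetical reconstruction -- the reporter's actual task is not shown
# in the issue. A bare reboot task like this exercises the same
# disconnect/reconnect path that hangs under Rundeck.
- name: Reboot the server and wait for it to come back
  ansible.builtin.reboot:
    reboot_timeout: 600   # seconds to wait for the host to return
```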
It looks like it is not able to properly reconnect after the reboot.
It retries every 30–40 seconds, always with the same error. After a while there is only one additional line, and the socket seems to have been removed as well:

o connection to reset: Control socket connect(/var/lib/rundeck/.ansible/cp/f4829f47ff): No such file or directory
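The error points at OpenSSH connection multiplexing: Ansible keeps a ControlMaster socket under ~/.ansible/cp/, and once the host reboots the stale socket is gone, so the reset fails. One place worth checking (a sketch with illustrative values only; the reporter's actual settings are not shown) is the [ssh_connection] section of ansible.cfg, where these sockets are configured:

```ini
# ansible.cfg -- illustrative values, not the reporter's configuration.
# These are the standard OpenSSH multiplexing options Ansible passes to ssh.
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
# Directory holding control sockets such as
# /var/lib/rundeck/.ansible/cp/f4829f47ff from the error above:
control_path_dir = ~/.ansible/cp
```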
In Rundeck, Ansible is configured to use an SSH key + passphrase (in the vault), and a root password, also in the vault.
I tried modifying a few SSH settings, but it didn't change anything.