Open · gmarcy opened this issue 3 years ago
@TomasTomecek any thoughts/suggestions? Not sure if this is a configuration issue, or simply beyond the capabilities of non-ssh connection modules.
Why not run the podman connection remotely as well?
Anyway, you probably need to use podman-remote for that:
ansible_podman_executable: podman-remote
https://github.com/containers/ansible-podman-collections/blob/ecc02870df6be7e3841bb5d191938b5e5c5db587/plugins/connection/podman.py#L63-L70
At least it was designed for that. But podman-remote didn't support the remote cp operation for a long time, and then the way it works changed substantially, so I doubt it will work now; we probably need to figure out a new way to work with the latest version of podman-remote.
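For what it's worth, a minimal, untested sketch of how that variable can be attached to the container host when it is added to the in-memory inventory (the host name here is illustrative):

# hypothetical task; hostname is illustrative
- name: add the container as an Ansible host that talks through podman-remote
  add_host:
    hostname: remote_container
    ansible_connection: containers.podman.podman
    ansible_podman_executable: podman-remote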
Thanks @sshnaidm for the response.
I tried changing ansible_podman_executable to podman-remote and the output was:
Using podman connection from collection
<remote_container> RUN [b'/usr/bin/podman-remote', b'mount', b'remote_container']
STDOUT b''
STDERR b"Error: unrecognized command `podman-remote mount`\nTry 'podman-remote --help' for more information.\n"
RC CODE 125
Failed to mount container remote_container: b"Error: unrecognized command `podman-remote mount`\nTry 'podman-remote --help' for more information."
<remote_container> RUN [b'/usr/bin/podman-remote', b'exec', b'--user', b'root', b'remote_container', b'/bin/sh', b'-c', b'echo ~root && sleep 0']
STDOUT b''
STDERR b'Error: cannot connect to the Podman socket, please verify that Podman REST API service is running: Get "http://d/v3.1.0-dev/libpod/_ping": dial unix ///run/user/1000/podman/podman.sock: connect: no such file or directory\n'
RC CODE 125
Any ideas on what to attempt next? I am not sure how podman-remote would be able to determine where remote_container was created, since that was done by a containers.podman.podman_container Ansible task. Are there some other podman commands I need to run to register that container with podman-remote?
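As a side note (this is an assumption on my part, not something stated in the plugin docs): podman-remote does not discover containers on its own; it only sees whatever the Podman API service on the remote machine reports, and it locates that service through a connection URI. A hypothetical check from the controller, with the socket path, remote user, and key path all assumed, might look like:

# hypothetical task run from the controller; socket path, remote user and key path are assumptions
- name: verify podman-remote can list containers on the remote host
  command: podman-remote ps
  environment:
    CONTAINER_HOST: "ssh://{{ ansible_user }}@{{ ansible_host }}/run/user/1000/podman/podman.sock"
    CONTAINER_SSHKEY: "{{ ansible_user_dir }}/.ssh/id_rsa"
  delegate_to: localhost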
After much additional trial and error I managed to get something to work; I am still assessing how stable it is. One thing I did notice is that podman.service on the remote host fills up with conmon processes, several for each remotely executed command. Their command lines all end with --exit-delay 300, so I'm guessing they will eventually go away, but it would be nice if there were a way to clean them up more proactively.
Here is the latest version of the remotehost play in my playbook:
- hosts: remotehost
  tags: remote
  tasks:
    - name: Ensure user specific systemd instance are persistent
      command: |
        loginctl enable-linger {{ ansible_user_id }}
      register: systemd_instance_persist
      changed_when: "systemd_instance_persist.rc == 0"
    - name: Retrieve remote user runtime path
      command: |
        loginctl show-user {{ ansible_user_id }} -p RuntimePath --value
      register: systemd_runtime_path
    - name: Enable and start podman.socket
      systemd:
        name: podman.socket
        enabled: yes
        state: started
        scope: user
    - name: Start podman.service
      systemd:
        name: podman.service
        state: started
        scope: user
    - name: create remote podman container
      containers.podman.podman_container:
        name: remote_container
        image: registry.fedoraproject.org/fedora:33
        command: sleep infinity
    - name: Add remote system connection definition for remote_container
      command: |
        podman --remote system connection add remote_container --identity "{{ ansible_user_dir }}/.ssh/id_rsa" "ssh://{{ ansible_host }}{{ systemd_runtime_path.stdout }}/podman/podman.sock"
      delegate_to: localhost
    - name: add remote container to hosts
      add_host:
        hostname: remote_container
        ansible_connection: containers.podman.podman
        ansible_python_interpreter: /usr/bin/python3
        ansible_podman_extra_args: --remote
    - name: get container uname info
      command: |
        uname -a
      delegate_to: remote_container
    - name: run dnf to bring remote container up to date
      dnf:
        state: latest
      delegate_to: remote_container
Any suggestions on how to simplify or otherwise wrangle that unwieldy process would be appreciated.
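For reference, the remotehost inventory entry behind this is nothing special; a placeholder example (all values made up):

# hypothetical inventory (inv); address and user are placeholders
all:
  hosts:
    remotehost:
      ansible_host: 192.0.2.10
      ansible_user: fedora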
Also, since there doesn't appear to be a buildah --remote option, is there any way to get a similar approach to work with the buildah connection plugin?
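One possible workaround I have not verified: since buildah has no remote client, the buildah commands could be run on remotehost itself over regular SSH rather than through the connection plugin, along the lines of:

# hypothetical task targeting remotehost over SSH, bypassing the buildah connection plugin
- name: update packages inside the remote buildah working container
  command: buildah run remote_buildah -- dnf -y upgrade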
@sshnaidm @TomasTomecek any additional thoughts on using the buildah connector remotely? I have a similar playbook with local and remote tags. The localhost version works great, but I am unable to get the remote equivalent to function.
- hosts: localhost
  tags: local
  tasks:
    - name: create local buildah container
      command: |
        buildah from --name local_buildah registry.fedoraproject.org/fedora:33
    - name: add local buildah container to hosts
      add_host:
        hostname: local_buildah
        ansible_connection: containers.podman.buildah
        ansible_python_interpreter: /usr/bin/python3
    - name: run dnf to bring local container up to date
      dnf:
        state: latest
      delegate_to: local_buildah
    - name: create the entrypoint script
      copy:
        content: |
          #!/bin/bash
          set -eo pipefail
          echo sleeping forever
          sleep infinity
        dest: /entrypoint.sh
        mode: 0755
      delegate_to: local_buildah
    - name: set entrypoint
      command: buildah config --entrypoint '["/entrypoint.sh"]' local_buildah
    - name: commit local buildah container
      command: buildah commit local_buildah local_image:latest

- hosts: remotehost
  tags: remote
  tasks:
    - name: create remote buildah container
      command: |
        buildah from --name remote_buildah registry.fedoraproject.org/fedora:33
    - name: add remote buildah container to hosts
      add_host:
        hostname: remote_buildah
        ansible_connection: containers.podman.buildah
        ansible_python_interpreter: /usr/bin/python3
    - name: run dnf to bring remote container up to date
      dnf:
        state: latest
      delegate_to: remote_buildah
    - name: create the entrypoint script
      copy:
        content: |
          #!/bin/bash
          set -eo pipefail
          echo sleeping forever
          sleep infinity
        dest: /entrypoint.sh
        mode: 0755
      delegate_to: remote_buildah
    - name: set entrypoint
      command: buildah config --entrypoint '["/entrypoint.sh"]' remote_buildah
    - name: commit remote buildah container
      command: buildah commit remote_buildah remote_image:latest
/kind bug
Description
I have a playbook to create a podman container on either a remote or a local machine. The podman_container task creates the container and runs it fine in either case. After I add the container to the hosts group and try to use the podman connection to reach it, the local play works but the remote play fails.
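Condensed, the pattern in question is roughly the following (names are illustrative; the full playbooks are in the comments in this thread):

- name: create the container on the target machine
  containers.podman.podman_container:
    name: remote_container
    image: registry.fedoraproject.org/fedora:33
    command: sleep infinity

- name: expose the container as an Ansible host that uses the podman connection plugin
  add_host:
    hostname: remote_container
    ansible_connection: containers.podman.podman

- name: run a task inside the container
  command: uname -a
  delegate_to: remote_container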
Steps to reproduce the issue:
Describe the results you received:
fatal: [remotehost]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1618771081.7663367-103788-258354183528348 `\" && echo ansible-tmp-1618771081.7663367-103788-258354183528348=\"` echo ~/.ansible/tmp/ansible-tmp-1618771081.7663367-103788-258354183528348 `\" ), exited with result 125", "unreachable": true}

Describe the results you expected:
Running the playbook with the local tag works. I would like to be able to use the connector on remote machines and not just on the machine where Ansible is installed.
Additional information you deem important (e.g. issue happens only occasionally):
I tried several alternatives found with Google searches, including several variations of adding
ansible_ssh_host: remotehost
but none were successful.

Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman

Output of ansible --version:

Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Inventory file (e.g. content of inv):

Playbook you run with ansible (e.g. content of playbook.yaml):

Command line and output of ansible run with high verbosity
ansible-playbook -vvvvvvvv -i inv ./playbook.yml -t remote
Additional environment details (AWS, VirtualBox, physical, etc.):