advanceboy opened this issue 3 months ago (Open)
Maybe I misunderstood, but isn't `ee_extra_volume_mounts` what you are looking for? EE means execution environment, which is the container that runs your jobs.
@YaronL16 Thank you for your response!
I appreciate the suggestion; however, I believe my issue is not resolved by using `ee_extra_volume_mounts`. The problem is that, even with this setting, volumes are not mounted into the container that actually executes the playbook.
The `task_extra_volume_mounts` and `ee_extra_volume_mounts` settings affect the mounts of the `<resourcename>-task` and `<resourcename>-ee` containers in the `<resourcename>-task-<random>` pod, respectively. The `<resourcename>-ee` container uses the `quay.io/ansible/awx-ee` image, but it only acts as a "receptor" and does not execute playbooks.
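For reference, the operator spec fields involved look roughly like this (a sketch based on the documented string-valued fields of the AWX custom resource; the volume name and host path are the placeholders from my test environment):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  # Define the volume once, then mount it into the task and EE containers.
  extra_volumes: |
    - name: volume-mount-hostpath-volume
      hostPath:
        path: /path/to/volume-mount/
        type: Directory
  task_extra_volume_mounts: |
    - name: volume-mount-hostpath-volume
      mountPath: /path/to/volume-mount/
      readOnly: true
  ee_extra_volume_mounts: |
    - name: volume-mount-hostpath-volume
      mountPath: /path/to/volume-mount/
      readOnly: true
```

Both mounts do show up in the `<resourcename>-task-<random>` pod, as `kubectl describe` confirms.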
```
$ kubectl -n awx describe pod awx-demo-task-6898d4ddbc-l82ts
Name:             awx-demo-task-6898d4ddbc-l82ts
Namespace:        awx
Priority:         0
Service Account:  awx-demo
...
Containers:
  redis:
    Image:  docker.io/redis:7
    ...
  awx-demo-task:
    Image:  quay.io/ansible/awx:24.6.1
    ...
    Mounts:
      ...
      /etc/tower/settings.py from awx-demo-settings (ro,path="settings.py")
      /path/to/volume-mount/ from volume-mount-hostpath-volume (ro)
      /var/lib/awx/projects from awx-demo-projects (rw)
      ...
  awx-demo-ee:
    Image:  quay.io/ansible/awx-ee:24.6.1
    ...
    Args:
      /bin/sh
      -c
      if [ ! -f /etc/receptor/receptor.conf ]; then
        cp /etc/receptor/receptor-default.conf /etc/receptor/receptor.conf
        sed -i "s/HOSTNAME/$HOSTNAME/g" /etc/receptor/receptor.conf
      fi
      exec receptor --config /etc/receptor/receptor.conf
    ...
    Mounts:
      ...
      /etc/receptor/work_private_key.pem from awx-demo-receptor-work-signing (ro,path="work-private-key.pem")
      /path/to/volume-mount/ from volume-mount-hostpath-volume (ro)
      /var/lib/awx/projects from awx-demo-projects (rw)
      ...
  awx-demo-rsyslog:
    Image:  quay.io/ansible/awx:24.6.1
    ...
Volumes:
  ...
  volume-mount-hostpath-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /path/to/volume-mount/
    HostPathType:  Directory
...
```
> EE means execution environment which is the container that runs your jobs
From what I have tested on AWX v24, running a playbook creates an `automation-job-<job_id>-<random>` pod, and the playbook executes inside its `worker` container. With the settings above, this worker container does not mount anything. For example, run the following playbook:
```yaml
- hosts: all
  tasks:
    - local_action:
        module: ansible.builtin.shell
        cmd: printenv
    - local_action:
        module: ansible.builtin.shell
        cmd: ls -la /
    - name: sleep 60
      ansible.builtin.command: sleep 60
```
The `printenv` result reveals the name of the pod that is running the job:
```yaml
changed: true
stdout: >-
  ...
  HOSTNAME=automation-job-3-568rw
  HOME=/runner
  JOB_ID=3
  ...
stderr: ''
rc: 0
cmd: printenv
```
The `ls -la /` result also shows that `/path/to/volume-mount/` is not mounted:
```yaml
changed: true
stdout: |-
  total 72
  drwxr-xr-x   1 root root 4096 Oct 31 06:45 .
  drwxr-xr-x   1 root root 4096 Oct 31 06:45 ..
  dr-xr-xr-x   2 root root 4096 Jun 25 14:23 afs
  lrwxrwxrwx   1 root root    7 Jun 25 14:23 bin -> usr/bin
  dr-xr-xr-x   2 root root 4096 Jun 25 14:23 boot
  drwxr-xr-x   5 root root  360 Oct 31 06:45 dev
  drwxr-xr-x   1 root root 4096 Oct 31 00:25 etc
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 home
  lrwxrwxrwx   1 root root    7 Jun 25 14:23 lib -> usr/lib
  lrwxrwxrwx   1 root root    9 Jun 25 14:23 lib64 -> usr/lib64
  drwx------   2 root root 4096 Oct 28 04:09 lost+found
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 media
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 mnt
  drwxr-xr-x   1 root root 4096 Oct 31 00:19 opt
  dr-xr-xr-x 315 root root    0 Oct 31 06:45 proc
  dr-xr-x---   1 root root 4096 Oct 31 00:24 root
  drwxr-xr-x   1 root root 4096 Oct 31 00:26 run
  drwxrwxr-x   1 root root 4096 Oct 31 06:45 runner
  lrwxrwxrwx   1 root root    8 Jun 25 14:23 sbin -> usr/sbin
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 srv
  dr-xr-xr-x  13 root root    0 Oct 31 06:45 sys
  drwxrwxrwt   1 root root 4096 Oct 31 06:45 tmp
  drwxr-xr-x   1 root root 4096 Oct 28 04:09 usr
  drwxr-xr-x   1 root root 4096 Oct 28 04:09 var
stderr: ''
rc: 0
cmd: ls -la /
```
While the playbook is running (the `sleep 60` task keeps the pod alive), inspecting the corresponding pod on Kubernetes confirms that the container running the job mounts nothing:
```
$ kubectl -n awx get pod
NAME                                              READY   STATUS      RESTARTS   AGE
automation-job-3-568rw                            1/1     Running     0          9s
awx-demo-migration-24.6.1-w6m5g                   0/1     Completed   0          111m
awx-demo-postgres-15-0                            1/1     Running     0          119m
awx-demo-task-6898d4ddbc-l82ts                    4/4     Running     0          119m
awx-demo-web-f9d668b4c-l77sv                      3/3     Running     0          119m
awx-operator-controller-manager-bb695bf5d-6lhls   2/2     Running     0          120m
```
```
$ kubectl -n awx describe pod automation-job-3-568rw
Name:             automation-job-3-568rw
Namespace:        awx
Priority:         0
Service Account:  default
...
Containers:
  worker:
    Image:  quay.io/ansible/awx-ee:latest
    ...
    Args:
      ansible-runner
      worker
      --private-data-dir=/runner
    ...
    Mounts:  <none>
...
Volumes:  <none>
```
Please confirm the following.
Feature Summary
AWX creates an `automation-job-<job_id>-<random>` pod and executes the playbook within its `worker` container. Using the method described in Custom Volume and Volume Mount Options, volumes can be mounted into the web or control plane containers. However, there is no way to mount a volume into the job's `worker` container.
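To illustrate the gap: mounting the host path into the job pod would require something like the following in the `automation-job` pod spec, written in plain Kubernetes `volumes`/`volumeMounts` syntax. This fragment is hypothetical; as far as I can tell, no current operator option generates it for the `worker` container:

```yaml
# Hypothetical pod spec fragment -- what the automation-job pod would need,
# reusing the volume name and host path from my test environment.
spec:
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest
      volumeMounts:
        - name: volume-mount-hostpath-volume
          mountPath: /path/to/volume-mount/
          readOnly: true
  volumes:
    - name: volume-mount-hostpath-volume
      hostPath:
        path: /path/to/volume-mount/
        type: Directory
```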