Open zosmf-Robyn opened 2 years ago
Hey @zosmf-Robyn this exact flow isn't going to work anymore. Those 2 jobs run in two different containers that get cleaned up after each run. You have options, though. You can use some sort of shared persistent storage that you map into the container at runtime. Or you can push the file to a 3rd-party site (e.g. Artifactory, or even a GitHub gist).
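As a rough sketch of the "push to a 3rd-party site" option: at the end of playbook1 you could upload the file with the `ansible.builtin.uri` module. The hostname, repository path, and credential variables below are placeholders, not a real endpoint:

```yaml
# Hypothetical sketch: upload the fetched file to a generic Artifactory
# repository so a later job in the workflow can download it.
- name: Upload fetched file to Artifactory
  ansible.builtin.uri:
    url: "https://artifactory.example.com/artifactory/generic-local/awx/remote_file.txt"
    method: PUT
    src: /tmp/remote_file.txt          # file body to send
    user: "{{ artifactory_user }}"     # assumed credential vars
    password: "{{ artifactory_password }}"
    force_basic_auth: true
    status_code: [200, 201]
  delegate_to: localhost
```

playbook2 would then download the file from the same URL before processing it.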
@chrismeyersfsu Thanks for the reply! Could you please explain in more detail how to set up shared persistent storage and map it into the container? I'm new to the awx-operator installation and not sure if I have the right setup. Below is my YAML file:
```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-instance1
spec:
  service_type: nodeport
  ingress_type: Route
  postgres_storage_class: nfs-client
  extra_volumes: |
    - name: shared-volume
      persistentVolumeClaim:
        claimName: awx-instance1-projects-claim
  init_container_extra_volume_mounts: |
    - name: shared-volume
      mountPath: /shared
  init_container_extra_commands: |
    chmod 775 /shared
    chgrp 1000 /shared
  ee_extra_volume_mounts: |
    - name: shared-volume
      mountPath: /shared
  task_extra_volume_mounts: |
    - name: shared-volume
      mountPath: /shared
  web_extra_volume_mounts: |
    - name: shared-volume
      mountPath: /shared
```
@zosmf-Robyn try asking on the mailing list. They have more experience than I do in actually running Tower :) Especially in Kubernetes.
If the data being passed between plays is large, consider leveraging a data store. Mounting files really is not a scalable solution.
Also interested in how this is solved, especially if the volumes are in separate availability zones from the pods.
Personally, I would push any artifacts from one playbook up to S3, an FTP server, a gist, or some cloud data store API. Then I would use set_stats to pass a URL or identifier to the next job template in the workflow.
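The set_stats handoff described above could look roughly like this; the variable name and URL are illustrative, not from the original thread:

```yaml
# In playbook1: publish the artifact location as a workflow artifact.
# AWX passes set_stats data to subsequent nodes in the same workflow.
- name: Pass the artifact location to the next workflow node
  ansible.builtin.set_stats:
    data:
      artifact_url: "https://artifactory.example.com/artifactory/generic-local/awx/remote_file.txt"

# In playbook2: the variable is available as an extra var, so the file
# can be downloaded before processing.
- name: Download the artifact
  ansible.builtin.get_url:
    url: "{{ artifact_url }}"
    dest: /tmp/remote_file.txt
  delegate_to: localhost
```

By default set_stats data is aggregated into the workflow's extra variables, which is why only small identifiers (not large payloads) should be passed this way.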
Anybody find a solution?
I want to have a workflow job template that links 2 job templates. Each job template launches a playbook: playbook1 fetches a file from a remote host and saves it to a local path such as "/tmp/remote_file.txt"; playbook2 reads the fetched file and does something else.
I installed AWX v14 on Docker and deployed such a workflow job template, which always worked well. The fetched file was saved on the awx_task container.
Now I have installed AWX v19 on OCP via awx-operator and deployed the same workflow job template. Since AWX v19 runs each job in a separate container, playbook1 succeeds (the fetched file is saved to "/tmp"), but playbook2 fails with the error: "FileNotFoundError: [Errno 2] No such file or directory: '/tmp/remote_file.txt'"
Using "set_stats" to pass results between playbooks is a workaround, but it is not applicable if the data to be passed to playbook2 is huge. For example, playbook2 may be required to process thousands of files fetched by playbook1. Is it possible to have a shared volume for all containers of each AWX job, so that these containers can access and share a specific file path?
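If a shared PVC is mounted into the execution-environment containers (as in the `extra_volumes`/`ee_extra_volume_mounts` spec earlier in this thread), the playbooks only need to target the mount path instead of "/tmp". A minimal sketch, assuming the volume is mounted at /shared in every EE pod; paths and hosts are placeholders:

```yaml
# playbook1: fetch the file into the shared volume instead of /tmp,
# so it survives the teardown of this job's container.
- name: Fetch file from remote host into the shared volume
  ansible.builtin.fetch:
    src: /etc/some_config.txt      # assumed remote path
    dest: /shared/remote_file.txt
    flat: true

# playbook2 (a later job in the workflow): read it back from the
# shared volume on the execution node.
- name: Read the fetched file
  ansible.builtin.slurp:
    src: /shared/remote_file.txt
  delegate_to: localhost
  register: fetched
```

Note this only works if both jobs land on pods that mount the same claim; with a ReadWriteOnce storage class in a different availability zone, the second pod may fail to attach the volume.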