TinLe closed this issue 7 months ago.
@TinLe Thanks for opening up an issue - the issue likely arises from the fact that ansible_connection is set to aws_ssm, which overrides the default behavior of connecting to hosts directly. This can interfere with tasks in the crowdstrike.falcon.falcon_install role that use delegate_to: localhost.
When ansible_connection is set to aws_ssm, Ansible executes tasks on the target hosts through an AWS Systems Manager (SSM) Session Manager session. Tasks that delegate actions to localhost may not function correctly in this context, because the delegated task inherits the aws_ssm connection and tries to open an SSM session to the control node where Ansible is executing, rather than treating it as the local machine.
Here are some suggestions:
- Add localhost ansible_connection=local to your inventory so that tasks delegated to localhost use the local connection.
- I would not use become: true, since the role already handles that for you. Otherwise you will run into a situation where localhost tries to run sudo, and it will most likely fail.
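For illustration, a minimal sketch of where that inventory line would sit (the group name and instance ID below are placeholders, not taken from this thread); the aws_ssm connection settings can then stay at the play level, as in the playbook below:

# hypothetical inventory.ini layout
localhost ansible_connection=local

[crowdstrike_prod]
i-0123456789abcdef0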
---
- name: Install CrowdStrike Falcon sensors
  hosts: "crowdstrike_{{ env }}"
  vars:
    ansible_connection: 'aws_ssm'
    ansible_aws_ssm_region: "{{ ssm_region }}"
    ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
    ansible_aws_ssm_profile: "{{ env }}"
    falcon_client_id: "{{ falcon_client_id }}"
    falcon_client_secret: "{{ falcon_client_secret }}"
    falcon_cloud: "{{ falcon_cloud }}"
  serial: 1
  # gather_facts: false
  roles:
    - role: crowdstrike.falcon.falcon_install
      vars:
        falcon_api_enable_no_log: false
      tags: falcon_install
@carlosmmatos ah yes, it was ansible_connection: aws_ssm that overrides delegate_to: localhost.
I ended up having to add something like this to the end of my inventory.ini:
[all:vars]
ansible_connection=aws_ssm
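For completeness, the resulting inventory.ini then looks roughly like the following sketch (the group and host entries are placeholders); the host-level ansible_connection=local on localhost takes precedence over the [all:vars] group setting, which is why the delegated tasks work again:

# hypothetical combined inventory.ini
localhost ansible_connection=local

[crowdstrike_prod]
i-0123456789abcdef0

[all:vars]
ansible_connection=aws_ssm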
I need to use import_role, as I must run other tasks before calling falcon_install, e.g. stopping and disabling other agents/sensors. Gathering facts hangs if become is not set to true.
Current playbook:
---
- name: Install CrowdStrike Falcon sensors
  hosts: "crowdstrike_{{ env }}"
  vars:
    ansible_aws_ssm_region: "{{ ssm_region }}"
    ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
    ansible_aws_ssm_profile: "{{ env }}"
  become: true
  serial: 1
  gather_facts: true
  tasks:
    - name: stop carbon black agent
      ansible.builtin.systemd_service:
        name: cbagentd
        state: stopped
        enabled: false
    - import_role:
        name: crowdstrike.falcon.falcon_install
      vars:
        falcon_client_id: "{{ falcon_client_id }}"
        falcon_client_secret: "{{ falcon_client_secret }}"
        falcon_cloud: "{{ falcon_cloud }}"
        falcon_api_enable_no_log: false
        falcon_api_sensor_download_path: /tmp
      tags: falcon_install
      become: false
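For reference, a playbook like this would presumably be run with the environment-specific values passed as extra vars, along these lines (the playbook filename and values are placeholders; the Falcon API credentials would be supplied the same way or via a vars file):

# hypothetical invocation
ansible-playbook -i inventory.ini install_falcon.yml -e env=prod -e ssm_region=us-west-2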
LOL. I've been using the wrong API key. Nevermind. It's all working now.
Glad it's working for you! If you run into anything else feel free to open up another issue!
Hi @carlosmmatos, I'm facing the same issue with the tasks that have delegate_to: localhost. I tried the suggested fixes from the previous responses, but with no luck, as Ansible would still try to connect to localhost over SSM.
@ls-omar-ajamieh Can you share your playbook / inventory?
@carlosmmatos this is the playbook.yaml:
---
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  become: false
  vars:
    ansible_aws_ssm_bucket_name: "xxxxxxx"
    ansible_aws_ssm_region: "eu-west-1"
    ansible_connection: aws_ssm
    secrets: "{{ lookup('amazon.aws.aws_secret',lookup('env', 'ENVIRONMENT') + '/crowdstrike', region=lookup('env', 'AWS_REGION')) }}"
    falcon_cid: "{{ secrets.falcon_cid }}"
    falcon_provisioning_token: "{{ secrets.falcon_provisioning_token }}"
    falcon_client_id: "{{ secrets.falcon_client_id }}"
    falcon_client_secret: "{{ secrets.falcon_client_secret }}"
    remove_falcon: "{{ secrets.remove_falcon }}"
    retry_enabled: "{{ secrets.retry_enabled }}"
    falcon_tags: "{{ secrets.falcon_tags }}"
    falcon_api_enable_no_log: false
  pre_tasks:
    - name: Gather service facts
      service_facts:
  roles:
    - role: crowdstrike.falcon.falcon_install
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_configure
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_uninstall
      when: remove_falcon == "yes"
  post_tasks:
    - name: Reload systemd daemon after Falcon Sensor uninstallation
      command: systemctl daemon-reload
      when: remove_falcon == "yes"
ansible.cfg:
[defaults]
remote_tmp = /tmp/.ansible/tmp
enable_plugins = aws_ec2, aws_ssm
force_color=True
inventory = aws_ec2.yaml
[connection]
localhost ansible_connection = local
aws_ec2.yaml:
---
plugin: aws_ec2
regions:
  - eu-west-1
hostnames:
  - instance-id
filters:
  tag:Name:
    - "xxxxxxxx"
  instance-state-name: running
all:
  hosts:
    localhost:
      ansible_connection: local
I also tried several other things, but all of the options I tried had the same outcome.
@ls-omar-ajamieh - I think the issue you have is that you are setting the ansible_connection at the playbook level. What if you tried to do this at the inventory level instead - for example:
Create a directory to house your inventory files (inventory/).
inventory/aws_ec2.yaml:
---
plugin: aws_ec2
regions:
  - eu-west-1
hostnames:
  - instance-id
filters:
  tag:Name:
    - "xxxxxxxx"
  instance-state-name: running
inventory/static.yml:
localhost ansible_connection=local

[aws_ec2:vars]
ansible_connection=aws_ssm
Now when you call your playbook, you can specify the directory as your inventory. To test you can do:
ansible-inventory -i inventory --list
Here is what your playbook could look like:
playbook.yml:
---
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  # become: false ** I DONT KNOW IF THIS MESSES ANYTHING UP **
  vars:
    ansible_aws_ssm_bucket_name: "xxxxxxx"
    ansible_aws_ssm_region: "eu-west-1"
    # ansible_connection: aws_ssm
    secrets: "{{ lookup('amazon.aws.aws_secret',lookup('env', 'ENVIRONMENT') + '/crowdstrike', region=lookup('env', 'AWS_REGION')) }}"
    falcon_cid: "{{ secrets.falcon_cid }}"
    falcon_provisioning_token: "{{ secrets.falcon_provisioning_token }}"
    falcon_client_id: "{{ secrets.falcon_client_id }}"
    falcon_client_secret: "{{ secrets.falcon_client_secret }}"
    remove_falcon: "{{ secrets.remove_falcon }}"
    retry_enabled: "{{ secrets.retry_enabled }}"
    falcon_tags: "{{ secrets.falcon_tags }}"
    falcon_api_enable_no_log: false
  pre_tasks:
    - name: Gather service facts
      service_facts:
  roles:
    - role: crowdstrike.falcon.falcon_install
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_configure
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_uninstall
      when: remove_falcon == "yes"
  post_tasks:
    - name: Reload systemd daemon after Falcon Sensor uninstallation
      command: systemctl daemon-reload
      when: remove_falcon == "yes"
As you can see, I commented out become: false, since our roles already handle become for you. But if that is working for you, then disregard.
And since we are now specifying ansible_connection: aws_ssm as a group var for the AWS EC2 instances, this should theoretically not impact localhost.
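As a quick sanity check of the layout above, the merged variables for a single host can be inspected, and the play can then be run against the inventory directory (this just follows from specifying the directory as the inventory):

# localhost should show ansible_connection: local, the EC2 hosts aws_ssm
ansible-inventory -i inventory --host localhost
ansible-playbook -i inventory playbook.yml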
Thank you @carlosmmatos the suggested solution fixed the issue 🙏
falcon_install tries to download the package from the API server to localhost, then copies it to the remote host for installation.
The bug is that, when running it from a local MacBook and installing to AWS EC2 instances, the role tries to use aws_ssm to connect to localhost, which does not work.
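To illustrate the failure mode, the pattern in question is roughly equivalent to the hypothetical task below (illustrative only, not the role's actual code); with ansible_connection forced to aws_ssm for every host, the delegated step tries to open an SSM session to the control node and fails:

# hypothetical sketch of a delegated download step
- name: Download the sensor package on the control node
  ansible.builtin.get_url:
    url: "https://example.internal/falcon-sensor.rpm"  # placeholder URL, not the CrowdStrike API
    dest: /tmp/falcon-sensor.rpm
  delegate_to: localhost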
collection version:
The ansible playbook I am using: