CrowdStrike / ansible_collection_falcon

Comprehensive toolkit for streamlining your interactions with the CrowdStrike Falcon platform.
https://galaxy.ansible.com/ui/repo/published/crowdstrike/falcon/
GNU General Public License v3.0

falcon_install tries to connect to localhost via aws_ssm #469

Closed TinLe closed 7 months ago

TinLe commented 7 months ago

falcon_install downloads the sensor package from the API server to localhost, then copies it to the remote host for installation.

The bug: when running from a local MacBook and installing onto AWS EC2 instances, the role tries to use the aws_ssm connection to reach localhost, which does not work.
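
For context, the download step is delegated to the control node. A rough sketch of that pattern (not the role's actual task file; the module name and arguments here are illustrative):

- name: Download the Falcon sensor package (delegated to the control node)
  crowdstrike.falcon.sensor_download:
    client_id: "{{ falcon_client_id }}"
    client_secret: "{{ falcon_client_secret }}"
  delegate_to: localhost

- name: Copy the sensor package to the target host
  ansible.builtin.copy:
    src: /tmp/falcon-sensor.rpm    # illustrative filename
    dest: /tmp/falcon-sensor.rpm

It is that delegate_to: localhost step that ends up going over aws_ssm in this setup.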

collection version:

$ ansible-galaxy collection list|grep falcon
crowdstrike.falcon                       4.2.2

$ python --version
Python 3.11.6 (main, Nov 15 2023, 10:45:14) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin

$ ansible --version
ansible [core 2.16.4]
  config file = /Users/tle/src/obs/devops/vuln-management/ansible.cfg
  configured module search path = ['/Users/tle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible
  ansible collection location = /Users/tle/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/tle/src/obs/devops/vuln-management/venv/bin/ansible
  python version = 3.11.6 (main, Nov 15 2023, 10:45:14) [Clang 15.0.0 (clang-1500.0.40.1)] (/Users/tle/src/obs/devops/vuln-management/venv/bin/python3)
  jinja version = 3.1.3
  libyaml = True

The Ansible playbook I am using:

---
- name: Install CrowdStrike Falcon sensors
  hosts: "crowdstrike_{{ env }}"
  vars:
    ansible_connection: 'aws_ssm'
    ansible_aws_ssm_region: "{{ ssm_region }}"
    ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
    ansible_aws_ssm_profile: "{{ env }}"
    falcon_client_id: "{{ falcon_client_id }}"
    falcon_client_secret: "{{ falcon_client_secret }}"
    falcon_cloud: "{{ falcon_cloud }}"
  become: true
  serial: 1
  # gather_facts: false
  tasks:
    - import_role:
        name: crowdstrike.falcon.falcon_install
      vars:
        falcon_api_enable_no_log: false
      tags: falcon_install
TASK [crowdstrike.falcon.falcon_install : ansible.builtin.include_tasks] *****************************************************************************************
task path: /Users/tle/.ansible/collections/ansible_collections/crowdstrike/falcon/roles/falcon_install/tasks/main.yml:12
included: /Users/tle/.ansible/collections/ansible_collections/crowdstrike/falcon/roles/falcon_install/tasks/auth.yml for i-0a6e449328c5b59d8

TASK [crowdstrike.falcon.falcon_install : CrowdStrike Falcon | Authenticate to CrowdStrike API] ******************************************************************
task path: /Users/tle/.ansible/collections/ansible_collections/crowdstrike/falcon/roles/falcon_install/tasks/auth.yml:2
redirecting (type: connection) ansible.builtin.aws_ssm to community.aws.aws_ssm
<localhost> ESTABLISH SSM CONNECTION TO: localhost
<localhost> ssm_retry: attempt: 0, caught exception(An error occurred (TargetNotConnected) when calling the StartSession operation: localhost is not connected.) from cmd (echo ~...), pausing for 0 seconds
<localhost> ESTABLISH SSM CONNECTION TO: localhost
<localhost> ssm_retry: attempt: 1, caught exception(An error occurred (TargetNotConnected) when calling the StartSession operation: localhost is not connected.) from cmd (echo ~...), pausing for 1 seconds
<localhost> ESTABLISH SSM CONNECTION TO: localhost
<localhost> ssm_retry: attempt: 2, caught exception(An error occurred (TargetNotConnected) when calling the StartSession operation: localhost is not connected.) from cmd (echo ~...), pausing for 3 seconds
<localhost> ESTABLISH SSM CONNECTION TO: localhost
The full traceback is:
Traceback (most recent call last):
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/executor/task_executor.py", line 165, in run
    res = self._execute()
          ^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/executor/task_executor.py", line 637, in _execute
    result = self._handler.run(task_vars=vars_copy)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/action/normal.py", line 39, in run
    result = merge_hash(result, self._execute_module(task_vars=task_vars, wrap_async=wrap_async))
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/action/__init__.py", line 1023, in _execute_module
    self._make_tmp_path()
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/action/__init__.py", line 473, in _make_tmp_path
    tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/action/__init__.py", line 906, in _remote_expand_user
    data = self._low_level_execute_command(cmd, sudoable=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/action/__init__.py", line 1315, in _low_level_execute_command
    rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 314, in wrapped
    return_tuple = func(self, *args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 552, in exec_command
    super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/ansible/plugins/connection/__init__.py", line 45, in wrapped
    self._connect()
  File "/Users/tle/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 478, in _connect
    self.start_session()
  File "/Users/tle/.ansible/collections/ansible_collections/community/aws/plugins/connection/aws_ssm.py", line 508, in start_session
    response = self._client.start_session(**start_session_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
    return self._make_api_call(operation_name, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tle/src/obs/devops/vuln-management/venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.TargetNotConnected: An error occurred (TargetNotConnected) when calling the StartSession operation: localhost is not connected.
fatal: [i-0a6e449328c5b59d8 -> localhost]: FAILED! => {
    "msg": "Unexpected failure during module execution: An error occurred (TargetNotConnected) when calling the StartSession operation: localhost is not connected.",
    "stdout": ""
}

NO MORE HOSTS LEFT ***********************************************************************************************************************************************
carlosmmatos commented 7 months ago

@TinLe Thanks for opening an issue. The issue likely arises from the fact that ansible_connection is set to aws_ssm at the play level, which overrides the default connection behavior for every host in the play. This interferes with tasks in the crowdstrike.falcon.falcon_install role that use delegate_to: localhost.

When ansible_connection is set to aws_ssm in the play's vars, that connection type is also applied to tasks delegated to localhost, so Ansible attempts to open an SSM session to "localhost". Those delegated tasks cannot work in this context, because SSM sessions are meant for registered remote instances, not for the control node where Ansible is executing.
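
To illustrate the mechanism with a minimal, self-contained sketch (the group name and task below are made up for this example, not taken from the role): a play-level vars entry has higher precedence than inventory host vars, so even a delegated task resolves ansible_connection to aws_ssm.

---
- name: Reproduce the delegation problem
  hosts: crowdstrike_prod          # placeholder group name
  vars:
    ansible_connection: aws_ssm    # play var applies to every host in the play,
                                   # including the delegated-to localhost
  tasks:
    - name: Runs on the control node, but still tries SSM because of the play var
      ansible.builtin.command: echo hello
      delegate_to: localhost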

Here are some suggestions:

  1. Try explicitly specifying your localhost connection in your inventory file:
    localhost ansible_connection=local
  2. I would not use become: true since the role already handles that for you. You will run into a situation where localhost will try to run sudo and it will most likely fail.

    1. If you need to space out your roles, you can do so within one playbook to make it easier. Example:
      
      ---
      - name: Install CrowdStrike Falcon sensors
        hosts: "crowdstrike_{{ env }}"
        vars:
          ansible_connection: 'aws_ssm'
          ansible_aws_ssm_region: "{{ ssm_region }}"
          ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
          ansible_aws_ssm_profile: "{{ env }}"
          falcon_client_id: "{{ falcon_client_id }}"
          falcon_client_secret: "{{ falcon_client_secret }}"
          falcon_cloud: "{{ falcon_cloud }}"
        serial: 1
        # gather_facts: false
        roles:
          - role: crowdstrike.falcon.falcon_install
            vars:
              falcon_api_enable_no_log: false
            tags: falcon_install

      - name: Do other stuff to hosts
        hosts: "crowdstrike_{{ env }}"
        vars:
          ansible_connection: 'aws_ssm'
          ansible_aws_ssm_region: "{{ ssm_region }}"
          ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
          ansible_aws_ssm_profile: "{{ env }}"
        become: true
        tasks:
          - name: Say hi
            shell: echo HI
      ...
TinLe commented 7 months ago

@carlosmmatos ah yes, it was ansible_connection: aws_ssm that overrides delegate_to: localhost.

I ended up having to add something like this to the end of my inventory.ini

[all:vars]
ansible_connection=aws_ssm

I need to use import_role because I must run other tasks before calling falcon_install, e.g. stopping and disabling other agents/sensors. Gathering facts hangs if become is not set to true.

Current playbook:

---
- name: Install CrowdStrike Falcon sensors
  hosts: "crowdstrike_{{ env }}"
  vars:
    ansible_aws_ssm_region: "{{ ssm_region }}"
    ansible_aws_ssm_bucket_name: "s3-ansible-ssm-bucket-{{ env }}"
    ansible_aws_ssm_profile: "{{ env }}"
  become: true
  serial: 1
  gather_facts: true
  tasks:
    - name: stop carbon black agent
      ansible.builtin.systemd_service:
        name: cbagentd
        state: stopped
        enabled: false

    - import_role:
        name: crowdstrike.falcon.falcon_install
      vars:
        falcon_client_id: "{{ falcon_client_id }}"
        falcon_client_secret: "{{ falcon_client_secret }}"
        falcon_cloud: "{{ falcon_cloud }}"
        falcon_api_enable_no_log: false
        falcon_api_sensor_download_path: /tmp
      tags: falcon_install
      become: false
TinLe commented 7 months ago

LOL. I've been using the wrong API key. Nevermind. It's all working now.

carlosmmatos commented 7 months ago

Glad it's working for you! If you run into anything else feel free to open up another issue!

ls-omar-ajamieh commented 1 week ago

Hi @carlosmmatos, I'm facing the same issue with the tasks that have delegate_to: localhost. I tried the suggested fixes from the previous responses, but with no luck: Ansible still tries to connect to localhost over SSM.

carlosmmatos commented 1 week ago

@ls-omar-ajamieh Can you share your playbook / inventory?

ls-omar-ajamieh commented 1 week ago

@carlosmmatos this is the playbook.yaml

---
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  become: false
  vars:
    ansible_aws_ssm_bucket_name: "xxxxxxx"
    ansible_aws_ssm_region: "eu-west-1"
    ansible_connection: aws_ssm
    secrets: "{{ lookup('amazon.aws.aws_secret',lookup('env', 'ENVIRONMENT') + '/crowdstrike', region=lookup('env', 'AWS_REGION')) }}"
    falcon_cid: "{{ secrets.falcon_cid }}"
    falcon_provisioning_token: "{{ secrets.falcon_provisioning_token }}"
    falcon_client_id: "{{ secrets.falcon_client_id }}"
    falcon_client_secret: "{{ secrets.falcon_client_secret }}"
    remove_falcon: "{{ secrets.remove_falcon }}"
    retry_enabled: "{{ secrets.retry_enabled }}"
    falcon_tags: "{{ secrets.falcon_tags }}"
    falcon_api_enable_no_log: false
  pre_tasks:
    - name: Gather service facts
      service_facts:
  roles:
    - role: crowdstrike.falcon.falcon_install
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_configure
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_uninstall
      when: remove_falcon == "yes"
  post_tasks:
    - name: Reload systemd daemon after Falcon Sensor uninstallation
      command: systemctl daemon-reload
      when: remove_falcon == "yes"

ansible.cfg

[defaults]
remote_tmp = /tmp/.ansible/tmp
enable_plugins = aws_ec2, aws_ssm
force_color=True
inventory = aws_ec2.yaml

[connection]
localhost ansible_connection = local

aws_ec2.yaml

---
plugin: aws_ec2
regions:
  - eu-west-1
hostnames:
  - instance-id
filters:
  tag:Name:
    - "xxxxxxxx"
  instance-state-name: running

all:
  hosts:
    localhost:
      ansible_connection: local

I also tried several other things, but all of the options I tried had the same outcome.

carlosmmatos commented 1 week ago

@ls-omar-ajamieh - I think the issue is that you are setting ansible_connection at the playbook level. What if you tried to do this at the inventory level instead - for example:

Create a directory to house your inventory files (inventory/)

inventory/aws_ec2.yaml:

---
plugin: aws_ec2
regions:
  - eu-west-1
hostnames:
  - instance-id
filters:
  tag:Name:
    - "xxxxxxxx"
  instance-state-name: running

inventory/static.yml:

localhost ansible_connection=local

[aws_ec2:vars]
ansible_connection=aws_ssm

Now when you call your playbook, you can specify the directory as your inventory. To test you can do:

ansible-inventory -i inventory --list

Here is what your playbook could look like: playbook.yml:

---
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  # become: false ** I DONT KNOW IF THIS MESSES ANYTHING UP **
  vars:
    ansible_aws_ssm_bucket_name: "xxxxxxx"
    ansible_aws_ssm_region: "eu-west-1"
    # ansible_connection: aws_ssm
    secrets: "{{ lookup('amazon.aws.aws_secret',lookup('env', 'ENVIRONMENT') + '/crowdstrike', region=lookup('env', 'AWS_REGION')) }}"
    falcon_cid: "{{ secrets.falcon_cid }}"
    falcon_provisioning_token: "{{ secrets.falcon_provisioning_token }}"
    falcon_client_id: "{{ secrets.falcon_client_id }}"
    falcon_client_secret: "{{ secrets.falcon_client_secret }}"
    remove_falcon: "{{ secrets.remove_falcon }}"
    retry_enabled: "{{ secrets.retry_enabled }}"
    falcon_tags: "{{ secrets.falcon_tags }}"
    falcon_api_enable_no_log: false
  pre_tasks:
    - name: Gather service facts
      service_facts:
  roles:
    - role: crowdstrike.falcon.falcon_install
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_configure
      when:
        - remove_falcon == "no"
        - "'falcon-sensor.service' not in ansible_facts.services"
    - role: crowdstrike.falcon.falcon_uninstall
      when: remove_falcon == "yes"
  post_tasks:
    - name: Reload systemd daemon after Falcon Sensor uninstallation
      command: systemctl daemon-reload
      when: remove_falcon == "yes"

As you can see, I commented out become: false since our roles already handle become for you. But if that is working for you, then disregard.

And since we are now specifying ansible_connection: aws_ssm as a group var for the AWS EC2 instances, this should theoretically not impact localhost.
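
One quick sanity check (assuming the inventory/ directory layout above; the instance ID below is a placeholder) is to confirm which connection each host resolves to:

# localhost should show "ansible_connection": "local"
ansible-inventory -i inventory --host localhost | grep ansible_connection

# an EC2 host from the dynamic inventory should show "ansible_connection": "aws_ssm"
ansible-inventory -i inventory --host i-0123456789abcdef0 | grep ansible_connection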

ls-omar-ajamieh commented 1 week ago

Thank you @carlosmmatos the suggested solution fixed the issue 🙏