CrowdStrike / caracara

Developer enhancements (DX) for FalconPy, the CrowdStrike Python SDK
MIT License

[ BUG ] RTR connections reported as successful when they actually fail #187

Open kevin-cooper-1 opened 1 month ago

kevin-cooper-1 commented 1 month ago

Bug Report Template

Describe the bug

I believe there is an issue in lines 181-186 of batch_session.py. Currently, the code simply appends all the hosts returned by the worker thread when attempting an RTR connection. However, as the logs below show, RTR connections fail when the host isn't online. Since the connection is reported as successful but no connection details are returned, it isn't possible to verify whether follow-up commands can actually be run.
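The batch-init response shown in the logs below already distinguishes failed hosts (empty `session_id`, populated `errors`). A minimal sketch of the filtering the worker could apply instead of appending every host, assuming only the response shape visible in the logs (this is not the actual caracara implementation):

```python
# Sketch: keep only hosts whose RTR session was actually established.
# Field names ("resources", "session_id", "errors") are taken from the
# batch-init-session response shown in the debug logs, not from caracara code.

def successful_hosts(batch_init_response: dict) -> dict:
    """Return only the resources with an established RTR session."""
    connected = {}
    for aid, result in batch_init_response.get("resources", {}).items():
        # A failed connection has an empty session_id and one or more errors,
        # e.g. code 40401 "Could not establish sensor comms".
        if result.get("session_id") and not result.get("errors"):
            connected[aid] = result
    return connected
```

With the (redacted) response in the logs below, where all three hosts are offline, this would return an empty dict, so the session could report zero connected devices instead of three.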

To Reproduce

Attempt to initiate an RTR connection to an endpoint that the web UI claims is "offline"

import logging
import os

from caracara import Client

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

client = Client(
    client_id=os.getenv("FALCON_CLIENT_ID"),
    client_secret=os.getenv("FALCON_CLIENT_SECRET"),
)
filters = client.FalconFilter()
filters.create_new_filter("OS", "Mac")
device_ids = client.hosts.get_device_ids(filters=filters)
batch_session = client.rtr.batch_session()
batch_session.connect(device_ids=device_ids, queueing=False)
for device_id, device_result in batch_session.run_generic_command('COMMAND', timeout=10).items():
    ...
batch_session.disconnect()

Logs

INFO:caracara.modules.rtr.batch_session:Establishing an RTR batch session with 3 systems
DEBUG:caracara.modules.rtr.batch_session:['X', 'Y', 'Z']
INFO:caracara.modules.rtr.batch_session:Divided up devices into 1 batches
INFO:caracara.modules.rtr.batch_session:ThreadPoolExecutor-4_0 | Batch worker started with a list of 3 devices
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.us-2.crowdstrike.com:443
DEBUG:urllib3.connectionpool:https://api.us-2.crowdstrike.com:443 "POST /real-time-response/combined/batch-init-session/v1?timeout=30&timeout_duration=30s HTTP/1.1" 404 463
INFO:caracara.modules.rtr.batch_session:ThreadPoolExecutor-4_0 | Connected to 3 systems
DEBUG:caracara.modules.rtr.batch_session:ThreadPoolExecutor-4_0 | {'meta': {'query_time': 29.500542149, 'powered_by': 'empower-api', 'trace_id': 'A'}, 'batch_id': '', 'resources': {'Z': {'session_id': '', 'complete': False, 'stdout': '', 'stderr': '', 'aid': 'Z', 'errors': [{'code': 50401, 'message': 'Exceeded maximum connect timeout: 29.50s'}], 'query_time': 0, 'offline_queued': False}, 'Y': {'session_id': '', 'complete': False, 'stdout': '', 'stderr': '', 'aid': 'Y', 'errors': [{'code': 40401, 'message': 'Could not establish sensor comms'}], 'query_time': 0, 'offline_queued': False}, 'X': {'session_id': '', 'complete': False, 'stdout': '', 'stderr': '', 'aid': 'X', 'errors': [{'code': 40401, 'message': 'Could not establish sensor comms'}], 'query_time': 0, 'offline_queued': False}}, 'errors': [{'code': 404, 'message': 'no successful hosts initialized on RTR'}]}
INFO:caracara.modules.rtr.batch_session:Completed a batch of RTR connections
INFO:caracara.modules.rtr.batch_session:Connected to 3 devices
DEBUG:caracara.modules.rtr.batch_session:[<caracara.modules.rtr.batch_session.InnerRTRBatchSession object at 0x0000>]
DEBUG:caracara.modules.rtr.batch_session:RTR session  has 599s remaining
INFO:caracara.modules.rtr.batch_session:Executing a command via RTR: COMMAND

Expected behavior

The RTR API should indicate which endpoints were successfully connected, and caracara should surface that. Currently caracara reports that all three connections completed successfully, so subsequent attempts to execute RTR scripts/commands fail without any visible reason (unless debug logging is enabled).
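Until the library filters failed connections itself, a caller can defensively check each per-device command result for errors before trusting its output. A hedged sketch, assuming each per-device result follows the same shape as the resources in the logged response (the `errors`/`stdout` field names are taken from that log, not a documented caracara contract):

```python
# Defensive split of per-device RTR command results into successes and
# failures. Field names ("errors", "stdout") are assumptions based on the
# response shape shown in the debug logs above.

def partition_results(results: dict) -> tuple[dict, dict]:
    """Split per-device results into (succeeded, failed) by their errors list."""
    succeeded, failed = {}, {}
    for device_id, result in results.items():
        if result.get("errors"):
            failed[device_id] = result["errors"]
        else:
            succeeded[device_id] = result.get("stdout", "")
    return succeeded, failed
```

In the repro above, the dict returned by `batch_session.run_generic_command(...)` could be passed through this before iterating, so offline hosts are logged as failures rather than silently producing empty output.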

Environment

Operating System Version

Windows 11
Version 23H2 (OS Build 22631.3880)

Python Version

Python 3.10.6

Poetry Version

N/A

Python Package Versions

caracara==0.7.0
caracara-filters==0.2.0
crowdstrike-falconpy==1.4.4
falcon-toolkit==3.4.2

Additional context

ChristopherHammond13 commented 1 month ago

Very interesting! We will look into this further when I am back from PTO.

Thank you so much for raising this bug report, and I really appreciate all the detail you have included to help us figure this one out!

ChristopherHammond13 commented 2 weeks ago

I may have a neat fix for this. Going to test it internally, then get a PR up and merged in if the fix succeeds. Thank you again for your detailed bug report!

kevin-cooper-1 commented 2 weeks ago

Thank you!