CiscoSecurity / fp-05-microsoft-sentinel-connector

Firepower Connector for Microsoft Sentinel

Script failing RHEL8.3 Python38 #1

Open jonesy5090 opened 3 years ago

jonesy5090 commented 3 years ago

Hi - we're currently running the Sentinel Connector script on RHEL 8.3, ingesting logs from an FMC. The client runs for some time and then fails with the trace below. Once it has failed, we cannot restart the service.

Grateful for any help you can offer.

Process Process-1:
Traceback (most recent call last):
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/baseproc.py", line 111, in _start
    callback()
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/receiver.py", line 159, in next
    self._parseMessageBundle( message )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/receiver.py", line 111, in _parseMessageBundle
    self._send( message )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/receiver.py", line 143, in _send
    self.callback( message )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 475, in onEvent
    parseDecorateTransformWrite( message, self.settings )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 256, in parseDecorateTransformWrite
    event = transform( event, settings )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 205, in transform
    output = adapters[ index ].dumps( event['record'] )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/adapters/cef.py", line 822, in dumps
    return cefAdapter.dumps()
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/adapters/cef.py", line 812, in dumps
    self.convert()
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/adapters/cef.py", line 737, in convert
    self.output[target] = function( self.record )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/adapters/cef.py", line 147, in <lambda>
    'cs1': lambda rec: packetData( rec['packetData'] )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/adapters/cef.py", line 115, in packetData
    payload = packet.getPayloadAsAscii()
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/common/packet.py", line 95, in getPayloadAsAscii
    asciiPayload = self.getPayloadAsBytes().decode( 'ascii', 'ignore' )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/common/packet.py", line 85, in getPayloadAsBytes
    self.getLayer3HeaderLength() +
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/common/packet.py", line 55, in getLayer3HeaderLength
    self.getNyble( ipOffsetNyble ) *
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/common/packet.py", line 41, in getNyble
    byte = struct.unpack( '>B', self.data[byteIndex] )[0]
TypeError: a bytes-like object is required, not 'int'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib64/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 467, in __init__
    super( SingleWorker, self ).__init__(
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 280, in __init__
    super( Subscriber, self ).__init__(
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/baseproc.py", line 293, in __init__
    super( BatchQueueProcess, self ).__init__(
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/baseproc.py", line 136, in __init__
    self.start()
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/pipeline.py", line 302, in start
    self._start( self.receiver.next )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/baseproc.py", line 118, in _start
    self.logger.exception(ex)
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/crossprocesslogging/baseClient.py", line 106, in exception
    data = self.serialise( data, True )
  File "/home/encore/fp-05-microsoft-sentinel-connector-4.0.0/estreamer/crossprocesslogging/baseClient.py", line 35, in serialise
    message = data.__class__.__name__ + ': ' + data.message
AttributeError: 'TypeError' object has no attribute 'message'
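Worth noting: the secondary AttributeError is its own Python 3 incompatibility, separate from the TypeError that triggered it. The logging code in baseClient.py builds the message with `data.message`, but Python 3 exceptions no longer have a `.message` attribute. A minimal sketch of a Python-3-safe version (the function name here is illustrative, not the connector's actual API):

```python
def serialise_exception(ex: BaseException) -> str:
    """Build a log-friendly string from any exception.

    Python 2 exceptions carried a .message attribute; Python 3 removed it,
    so str(ex) is the portable way to recover the message text.
    """
    return type(ex).__name__ + ': ' + str(ex)


# Example: reproduces the shape of the message the connector tries to log
print(serialise_exception(TypeError("a bytes-like object is required, not 'int'")))
```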

jslitzkerttcu commented 3 years ago

I'm also seeing this error on Ubuntu. Did you ever get it resolved?

jonesy5090 commented 3 years ago

Yes, we worked around it by changing a couple of lines in packet.py; see:

https://github.com/CiscoSecurity/fp-05-microsoft-sentinel-connector/commit/6d666d649cd3540a2dfe630dc17d3a105995e297#diff-ba5e33f55afd05ea4507892ff39dc2bd5cfc52808a28d5ace9b84cc6b1462d0a

The obvious caveat is that the change hasn't been validated by the developers. In our testing we found the failure was triggered by a particular log sent by the Firepower firewalls; we never identified exactly which log or data caused it to fall over, but after changing packet.py it has been running ever since.
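For anyone who doesn't want to read the diff, the root cause is that indexing a bytes object returns a one-character string in Python 2 but an int in Python 3, so `struct.unpack( '>B', self.data[byteIndex] )` raises the TypeError in the trace above. A minimal illustration of the problem and the slicing-based fix (illustrative, not the exact commit):

```python
import struct

data = b'\x45\x00\x00\x3c'  # first bytes of a typical IPv4 header

# Python 3: data[0] is the int 69, so this would raise
# "TypeError: a bytes-like object is required, not 'int'":
#   struct.unpack('>B', data[0])

# A slice of length 1 stays a bytes object in both Python 2 and 3,
# so struct.unpack keeps working unchanged:
byte = struct.unpack('>B', data[0:1])[0]
assert byte == 0x45

# In Python-3-only code, struct.unpack is unnecessary for a single byte:
assert data[0] == 0x45
```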

Good luck!

jslitzkerttcu commented 3 years ago

Got it, thanks!

BigTechNick commented 3 years ago

@jonesy5090 I still get the same error even after changing the two lines in packet.py mentioned above. (screenshot omitted)

jonesy5090 commented 3 years ago

Sorry, that looks like a different error from the one we were receiving. Are you using the Python3 branch or the main branch of the connector?

BigTechNick commented 3 years ago

> Looks like a different error to the one we were receiving sorry. Are you using the Python3 branch or the main branch of the connector?

Python 3 branch.

BigTechNick commented 2 years ago

Any suggestions?

dev-sec7 commented 2 years ago

This has been an open issue since November 2020. Are there any plans to release a fix and close this issue?

thinkdreams commented 2 years ago

We have a similar issue with the latest version of the script: FMC running 7.0.1, with Python 3.8.10 on Ubuntu 20.04.4.

The script runs fine and sends data to Azure Sentinel for a short time, then fails with the error below. Once this error occurs, re-running the script continues to fail and no data is sent (our assumption is that it resumes from the last checkpoint and still cannot parse the same data).

Are there any updates on this? I have a TAC case open on it as well.

Traceback (most recent call last):
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/pipeline.py", line 139, in parse
    parser.parse()
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/adapters/binary.py", line 734, in parse
    self._parse( self.data, self.offset, self.record )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/adapters/binary.py", line 600, in _parse
    offset = self._parseAttributes( data, offset, attributes, record )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/adapters/binary.py", line 485, in _parseAttributes
    offset = self._parseBlock( data, offset, attribute, block )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/adapters/binary.py", line 225, in _parseBlock
    offset = self._parseAttributes( data, offset, blockDefinition, context )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/adapters/binary.py", line 315, in _parseAttributes
    raise ParsingException(
estreamer.exception.ParsingException: _attributes() | offset (1398100638) > length (196) | blockType=167 recordType=94

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/baseproc.py", line 111, in _start
    callback()
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/receiver.py", line 159, in next
    self._parseMessageBundle( message )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/receiver.py", line 111, in _parseMessageBundle
    self._send( message )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/receiver.py", line 143, in _send
    self.callback( message )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/pipeline.py", line 431, in onEvent
    event = parse( message, self.settings )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/pipeline.py", line 155, in parse
    logger.warning( ex )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/crossprocesslogging/baseClient.py", line 94, in warning
    self.log(logging.WARNING, data)
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/crossprocesslogging/baseClient.py", line 69, in log
    data = self.serialise( data )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/crossprocesslogging/baseClient.py", line 35, in serialise
    message = data.__class__.__name__ + ': ' + data.message
AttributeError: 'ParsingException' object has no attribute 'message'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/pipeline.py", line 280, in __init__
    super( Subscriber, self ).__init__(
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/baseproc.py", line 293, in __init__
    super( BatchQueueProcess, self ).__init__(
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/baseproc.py", line 136, in __init__
    self.start()
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/pipeline.py", line 302, in start
    self._start( self.receiver.next )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/baseproc.py", line 118, in _start
    self.logger.exception(ex)
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/crossprocesslogging/baseClient.py", line 106, in exception
    data = self.serialise( data, True )
  File "/home/omsadmin/fp-05-microsoft-sentinel-connector-main/estreamer/crossprocesslogging/baseClient.py", line 35, in serialise
    message = data.__class__.__name__ + ': ' + data.message
AttributeError: 'AttributeError' object has no attribute 'message'

thinkdreams commented 2 years ago

I have confirmed that data IS sent to Sentinel until it hits the brick wall. I also ran the cef_troubleshooter.py script specified by Microsoft and confirmed that our OMS agent server plus this eNcore script is sending data properly to Azure Sentinel.

So it's definitely something going wrong when parsing the data.

Bear in mind that the integration component of FMC (eStreamer) is supposed to be a supported Cisco feature, and that support should include this script, since the repo is maintained by Cisco. I'd really like commentary from either the maintainer on this forum or from Cisco TAC as to the disposition of this script, as we'd really like to complete this part of the integration into our SIEM/SOAR (i.e. Sentinel). I've linked Cisco TAC to this issue in the hope that they'll address it for all of us and possibly provide some direction.

rmplatts commented 2 years ago

> I have confirmed that data IS sent to Sentinel until it hits the brick wall. Also ran the cef_troubleshooter.py script specified by Microsoft and confirmed that our OMS agent server + this Encore script is sending data properly to Azure Sentinel.
>
> So it's definitely something wrong in parsing the data.
>
> Bear in mind that the Integration component of FMC (eStreamer) should be a supported Cisco thing - and it should include this script, since the repo is maintained by Cisco. I'd really like to get commentary by either the maintainer on this forum, or by Cisco TAC as to the disposition of this script - as we'd really like to complete this part of the integration into our SIEM/SOAR (i.e. Sentinel). I've linked Cisco TAC to this issue in the hopes that they'll address this for all of us with questions and possibly get some direction on this.

Did you get any resolution to this issue? I am facing the exact same problem, with very similar errors to yours.

Fresh-built Ubuntu 20.04 (Azure marketplace image):
- cef_installer.py run and connected successfully to the Sentinel workspace
- fp-05-microsoft-sentinel-connector cloned from git
- estreamer.conf edited with the correct IP
- ./encore.sh test is successful (certs are OK)
- ./encore.sh start is successful

We see events begin to populate into Sentinel, so all firewall rules, certificates and setup are OK. However, after a random interval (sometimes 5 minutes, sometimes 30) the process fails and events stop coming in.

BigTechNick commented 2 years ago

> I have confirmed that data IS sent to Sentinel until it hits the brick wall. Also ran the cef_troubleshooter.py script specified by Microsoft and confirmed that our OMS agent server + this Encore script is sending data properly to Azure Sentinel. So it's definitely something wrong in parsing the data. Bear in mind that the Integration component of FMC (eStreamer) should be a supported Cisco thing - and it should include this script, since the repo is maintained by Cisco. I'd really like to get commentary by either the maintainer on this forum, or by Cisco TAC as to the disposition of this script - as we'd really like to complete this part of the integration into our SIEM/SOAR (i.e. Sentinel). I've linked Cisco TAC to this issue in the hopes that they'll address this for all of us with questions and possibly get some direction on this.
>
> Did you get any resolution to this issue? I am facing the exact same problem with very similar errors to yours.
>
> Fresh built Ubuntu 20.04 (Azure market) cef_installer.py run and connected successfully to sentinel workspace fp-05-microsoft-sentinel-connector clones from git estreamer.conf edited with correct IP ./encore.sh test is successful (certs are OK) ./encore.sh start is successful
>
> We see events begin to populate into sentinel so all firewall rules, certificates and setup is OK. However after a random time sometimes 5 minutes, sometimes 30 mins the process fails and events stop coming in.

There is no fix. We deployed a bash script that restarts the VM and the process every time it stops.
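The same idea can be expressed as a small Python watchdog instead of bash. A minimal sketch (ENCORE_DIR and the pid-file path are illustrative assumptions, not our exact script; running encore.sh with cwd set to the install directory also avoids the relative-path problems cron causes):

```python
#!/usr/bin/env python3
"""Hypothetical watchdog: restart encore if its process has died.

ENCORE_DIR and PID_FILE are placeholder paths for illustration;
adjust them to match your installation and how your encore.sh
records its pid.
"""
import os
import subprocess
import time
from typing import Optional

ENCORE_DIR = "/opt/fp-05-microsoft-sentinel-connector"  # assumed install path
PID_FILE = os.path.join(ENCORE_DIR, "encore.pid")       # assumed pid file

def is_running(pid: int) -> bool:
    """Return True if a process with this pid currently exists."""
    try:
        os.kill(pid, 0)  # signal 0 checks existence without killing
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but is owned by another user

def read_pid(path: str) -> Optional[int]:
    """Read a pid from a file; None if the file is missing or malformed."""
    try:
        with open(path) as fh:
            return int(fh.read().strip())
    except (OSError, ValueError):
        return None

def check_and_restart() -> None:
    """Restart encore when the recorded process is gone."""
    pid = read_pid(PID_FILE)
    if pid is None or not is_running(pid):
        # cwd=ENCORE_DIR so encore.sh's relative paths resolve correctly
        subprocess.run(["./encore.sh", "restart"], cwd=ENCORE_DIR, check=False)

if __name__ == "__main__":
    while True:
        check_and_restart()
        time.sleep(60)
```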

thinkdreams commented 2 years ago

Just an update: as of right now I still have an active TAC case open to try to resolve this without resorting to bash scripts. Cisco needs to address the root cause rather than requiring us to work around it with scripts.

Incidentally @BigTechNick, would you be able to post the script you're using, for reference? I'd rather not restart the entire VM (since it's also the Azure OMS server for other functions), but I'd love to see how you're getting around it.

rmplatts commented 2 years ago

> Just an update, as of right now I've still got an active TAC case opened to try and resolve this without employing bash scripts, etc. Cisco needs to address the root cause rather than requiring us to work around it using scripts.
>
> Incidentally @BigTechNick , would you be able to post the script you're using for reference? I'd rather not restart the entire VM (since it's also the Azure OMS server for other functions) but love to see how you're getting around it.

I tried a .sh script, auto-started from cron, that would stop and start encore.sh every hour. However, this fails because the relative paths in encore.sh are not resolved when it is started from cron (cron jobs run with a minimal environment and a different working directory).

thinkdreams commented 1 year ago

Well, after much ado with Cisco TAC, I had a call with their devs today. I finally understand why things weren't working, for me at least, and I'm hoping this is the fix for you as well.

Basically, Cisco has been updating the main branch, not the python3 branch, and main now uses Python 3 (not Python 2, as the branch names would suggest). The main branch works: after I reinstalled from main and ran it in the foreground, data is flowing. I'm going to do more testing.

Cisco stated they would update this repo and remove the python3 branch entirely to avoid confusion.