Azure / Azure-Sentinel

Cloud-native SIEM for intelligent security analytics for your entire enterprise.
https://azure.microsoft.com/en-us/services/azure-sentinel/
MIT License

logstash.outputs.azureloganalytics dropping messages #1471

Closed: Colgaton closed this issue 3 years ago

Colgaton commented 3 years ago

Describe the bug

I am trying to forward AWS CloudTrail messages to Azure Log Analytics using Logstash. Events are stored in an S3 bucket.

I am seeing the following errors in the Logstash logs, at random intervals:

```
[2020-12-15T18:17:57,667][INFO ][logstash.outputs.azureloganalytics] Resending 20124 documents as log type AWSRedacted to DataCollector API in 2 seconds.
[2020-12-15T18:18:02,241][INFO ][logstash.outputs.azureloganalytics] Resending 20124 documents as log type AWSRedacted to DataCollector API in 2 seconds.
[2020-12-15T18:18:06,996][INFO ][logstash.outputs.azureloganalytics] Resending 20124 documents as log type AWSRedacted to DataCollector API in 2 seconds.
[2020-12-15T18:18:11,138][INFO ][logstash.outputs.azureloganalytics] Resending 20124 documents as log type AWSRedacted to DataCollector API in 2 seconds.
[2020-12-15T18:18:15,139][INFO ][logstash.outputs.azureloganalytics] Resending 20124 documents as log type AWSRedacted to DataCollector API in 2 seconds.
[2020-12-15T18:18:19,257][ERROR][logstash.outputs.azureloganalytics] Could not resend 20124 documents, message is dropped.
[2020-12-15T18:18:22,407][INFO ][logstash.outputs.azureloganalytics] Changing buffer size.[configuration='18524' , new_size='20345']
[2020-12-15T18:18:24,980][INFO ][logstash.outputs.azureloganalytics] Successfully posted 18524 logs into custom log analytics table[AWSRedacted ].
```

Most messages reach Azure, but some never show up, and I am afraid those are the ones being dropped as above.

Sometimes I also see:

```
[2020-12-15T18:17:57,331][ERROR][logstash.outputs.azureloganalytics] Exception in posting data to Azure Loganalytics.
[Exception: '413 Payload Too Large']
```

Here is the output config:

```
microsoft-logstash-output-azure-loganalytics {
  workspace_id => "XXXXXXXXXXXXXXXXXXXXXXXXXX"
  workspace_key => "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  custom_log_table_name => "AWSRedacted"
  plugin_flush_interval => 5
  amount_resizing => true
  codec => "json"
}
```

Any idea?

github-actions[bot] commented 3 years ago

Thank you for submitting an Issue to the Azure Sentinel GitHub repo! You should expect an initial response to your Issue from the team within 5 business days. Note that this response may be delayed during holiday periods. For urgent, production-affecting issues please raise a support ticket via the Azure Portal.

NoamLandress commented 3 years ago

Hi @Colgaton, thanks a lot for bringing this to our attention. Could you please open a support ticket for this issue so we can assist as soon as possible? Thanks, Noam.

Colgaton commented 3 years ago

Hi @NoamLandress are there instructions on how to open this ticket? Thank you.

NoamLandress commented 3 years ago

Hi @Colgaton, I apologize for the delay. Please feel free to open a support ticket here: https://ms.portal.azure.com/#create/Microsoft.Support For the service, select "Azure Sentinel". Thanks, Noam.

RobertMihai commented 3 years ago

Hi,

I've had the same issue a number of times. The easiest way to work around it is to see how many messages you can reasonably send before the payload becomes too large, and set hard limits based on that.

It seems there is no functionality implemented to reduce the number of records put on the wire when the API limit is hit; instead, the plugin just resends the same batch over and over until it fails and drops the messages.
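The behavior described above could be avoided by splitting an oversized batch and retrying the halves, rather than resending the whole batch until it is dropped. A minimal Ruby sketch of that idea (illustrative only, not the plugin's actual code; `MAX_PAYLOAD_BYTES` and `post_to_api` are stand-ins):

```ruby
require 'json'

MAX_PAYLOAD_BYTES = 1_000 # stand-in for the API's real payload limit

# Pretend sender: fails with a 413-style error when the body is too large.
def post_to_api(docs)
  body = JSON.generate(docs)
  raise "413 Payload Too Large" if body.bytesize > MAX_PAYLOAD_BYTES
  docs.size # number of documents "posted"
end

# Recursively halve the batch until each piece fits, instead of
# retrying the same oversized batch and eventually dropping it.
def post_with_split(docs)
  return 0 if docs.empty?
  begin
    post_to_api(docs)
  rescue RuntimeError
    raise if docs.size == 1 # a single document that still exceeds the limit
    mid = docs.size / 2
    post_with_split(docs[0...mid]) + post_with_split(docs[mid..])
  end
end
```

With this approach, a batch that triggers the size error is still delivered in smaller pieces; only a single document larger than the limit is unsendable.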

Is a fix for this currently being worked on?

Regards, Robert

sarah-yo commented 3 years ago

Being worked on in a support case, so closing Github issue

oencarnacion commented 2 years ago

Has anyone found a solution to this problem?

bhalbright commented 1 year ago

> Being worked on in a support case, so closing Github issue

Was a fix ever published?

chralph commented 1 year ago

Is there a resolution to this? We are still having this issue. We tried setting amount_resizing => "true", which works until we get a payload that is too large; we are then stuck in a loop where no logs can be sent. The workaround is to set max_items to an amount you think the API can handle, but this is not a smart solution.
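For reference, a workaround configuration along those lines might look like the following. This is a sketch only: the max_items value is illustrative and must be tuned against your own event sizes, and disabling amount_resizing is an assumption to keep the batch size fixed at the cap.

```
microsoft-logstash-output-azure-loganalytics {
  workspace_id => "XXXXXXXXXXXXXXXXXXXXXXXXXX"
  workspace_key => "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  custom_log_table_name => "AWSRedacted"
  plugin_flush_interval => 5
  amount_resizing => false  # keep the batch size fixed instead of growing it
  max_items => 2000         # illustrative cap; tune to stay under the payload limit
  codec => "json"
}
```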