Open Conklin-Spencer-bah opened 1 month ago
Hey there, we also stumbled upon this issue and found it odd that you cannot split simple JSON arrays into multiple documents.
I worked around this with the following hack:
1) Parse the JSON key `logEvents`.
2) Stringify the key again.
3) Manipulate this stringified key:
   a) remove the array character (`[`) at the beginning,
   b) remove the array character (`]`) at the end,
   c) insert a unique delimiter character between the entries.
4) Split the event on this delimiter character.
5) Process these split events in a chained pipeline (sketched below).
I am on mobile so my apologies for the formatting:
```
preprocess_pipeline:
  sink: s3
  processor: ...

message_process_pipeline:
  source:
    pipeline:
      name: preprocess_pipeline
  processor: ...
```
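In case it helps others, here is a fuller sketch of how the hack above could be expressed as two chained pipelines. The processor choices (`substitute_string`, `split_event`, `parse_json`) and their options are assumptions about one way to express steps 1-5, not the exact configuration used above, so check the option names against the Data Prepper processor docs for your version:

```yaml
preprocess_pipeline:
  source:
    s3:
      # ... bucket, notification, and codec configuration ...
  processor:
    # Steps 1-3 of the hack: isolate the logEvents array in the raw record
    # and put a unique delimiter between its entries. The delimiter must be
    # a character that cannot appear inside the log messages themselves.
    - substitute_string:
        entries:
          # drop everything up to and including the opening "[" of logEvents
          - source: "message"
            from: '^.*"logEvents":\s*\['
            to: ""
          # drop the closing "]}" of the record
          - source: "message"
            from: '\]\s*}\s*$'
            to: ""
          # replace the "},{" boundaries between array entries with "}|{"
          - source: "message"
            from: '},\s*{'
            to: "}|{"
  sink:
    - pipeline:
        name: "message_process_pipeline"

message_process_pipeline:
  source:
    pipeline:
      name: "preprocess_pipeline"
  processor:
    # Step 4: split the event on the delimiter so each log event becomes
    # its own event, then parse each piece back into structured JSON.
    - split_event:
        field: "message"
        delimiter: "|"
    - parse_json:
        source: "message"
  sink:
    - opensearch:
        # ... hosts, index, credentials ...
```

Note that this simplified version keeps only the individual `logEvents` entries and drops the envelope fields (`messageType`, `owner`, `logGroup`, `logStream`); copying those onto each entry would need additional processors.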
We asked for the AWS service team to support splitting json arrays into multiple docs, but this hack seems to work for us now.
Edit: formatting seems to remove the backslashes, will try to adjust formatting but I hope you get the idea
@JunChatani Nice workaround! Thanks!
> We asked for the AWS service team to support splitting json arrays into multiple docs, but this hack seems to work for us now.
I agree that `split_event` should support splitting events on JSON arrays. Have you opened a GitHub issue for this already? If not, we can use this issue to track it.
I haven’t opened an issue yet, perhaps this one can be used to track it then.
@Conklin-Spencer-bah,
To clarify, would you want this output?
{ "messageType": "DATA_MESSAGE", "owner": "123456789", "logGroup": "foo", "logStream": "bar", "logEvents": {"id": "789102", "message": "another log message here", "timestamp" 1727880215114}}
and
{ "messageType": "DATA_MESSAGE", "owner": "123456789", "logGroup": "foo", "logStream": "bar", "logEvents": {"id": "123456", "message": "some log message here", "timestamp" 1727880215114}}
and
{ "messageType": "DATA_MESSAGE", "owner": "123456789", "logGroup": "foo", "logStream": "bar", "logEvents": {"id": "99999", "message": "yet another log message here", "timestamp" 1727880215114}}
You got it. This is particularly useful in cases where you have log messages sent from CloudWatch -> Firehose -> S3.
The object in S3 is stored the way I initially described. You can easily emulate this by subscribing a Firehose delivery stream to a CloudWatch Logs group and sending the results to S3.
Implementing this enhancement would actually resolve the limitation noted in the CloudWatch Logs ingestion documentation today (which suggests streaming logs directly to OpenSearch instead of using Firehose). See here.
"Currently, Firehose does not support the delivery of CloudWatch Logs to Amazon OpenSearch Service destination because Amazon CloudWatch combines multiple log events into one Firehose record and Amazon OpenSearch Service cannot accept multiple log events in one record."
Did you want to do this because OpenSearch Dashboards cannot visualize nested fields?
**Describe the bug**
It could potentially be possible to do this; however, I have not been able to find anything in the documentation that covers it.
If you have CloudWatch Logs -> Data Firehose -> S3 and want to pull that into Data Prepper, the whole record comes in as one multi-line event.
The structure seems to be like so:
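For reference, a representative record looks like this (reconstructed from the sample events elsewhere in this thread; the field values are illustrative):

```json
{
  "messageType": "DATA_MESSAGE",
  "owner": "123456789",
  "logGroup": "foo",
  "logStream": "bar",
  "logEvents": [
    { "id": "123456", "timestamp": 1727880215114, "message": "some log message here" },
    { "id": "789102", "timestamp": 1727880215114, "message": "another log message here" },
    { "id": "99999", "timestamp": 1727880215114, "message": "yet another log message here" }
  ]
}
```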
What I was hoping to do was use Data Prepper to read the log message from S3 (shaped like the above), parse out `logEvents`, and treat each entry as an individual log message to publish to both S3 and OpenSearch.
S3, because it will allow me to create a neat prefix structure of accountid/log-group/YYYY/MM/DD/HH.
However, I am not sure it is possible to extract the `logEvents` key, which contains an array of objects, and treat each entry as a separate event.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
I was expecting a feature within Data Prepper to support something like so:
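For example, something along these lines, where a processor such as `split_event` could take a field that holds a JSON array and emit one event per element. This is a hypothetical configuration to illustrate the idea; splitting on an array field does not exist in Data Prepper today:

```yaml
log-pipeline:
  source:
    s3:
      # ... bucket and notification configuration ...
  processor:
    - parse_json:
        source: "message"
    # hypothetical: emit one event per element of the logEvents array
    - split_event:
        field: "logEvents"
  sink:
    - opensearch:
        # ... hosts, index, credentials ...
    - s3:
        # ... bucket with an accountid/log-group/YYYY/MM/DD/HH prefix ...
```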
**Environment (please complete the following information):**
**Additional context**
AWS-managed OpenSearch and AWS-managed OSIS are being used. I set up a local container deployment to expedite testing and still see the same issue.