Open madeline-k opened 3 years ago
@madeline-k With this being a year old, is the PR still relevant? Are there processes holding it up?
@madeline-k Is work being done on this issue? This seems to be a long-pending and inevitable feature for Kinesis Data Firehose.
There was another construct that had this functionality, but it is no longer available: `aws_kinesisstreams_kinesisfirehose_s3`.
You could define a set of processors:
```ts
processingConfiguration: {
  enabled: true,
  processors: [
    {
      type: 'MetadataExtraction',
      parameters: [
        {
          parameterName: 'MetadataExtractionQuery',
          parameterValue: '{tablename: .tableName}',
        },
        {
          parameterName: 'JsonParsingEngine',
          parameterValue: 'JQ-1.6',
        },
      ],
    },
    {
      type: 'AppendDelimiterToRecord',
      parameters: [
        {
          parameterName: 'Delimiter',
          parameterValue: '\\n',
        },
      ],
    },
  ],
}
```
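For context, here is a minimal sketch of where that `processingConfiguration` sits when using the L1 `CfnDeliveryStream` directly (CDK v2 import path; the bucket and role ARNs are placeholders):

```ts
import { Stack } from 'aws-cdk-lib';
import { CfnDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';

declare const stack: Stack;

// Sketch only: the bucket and role ARNs are placeholders for existing resources.
new CfnDeliveryStream(stack, 'DeliveryStream', {
  deliveryStreamType: 'DirectPut',
  extendedS3DestinationConfiguration: {
    bucketArn: 'arn:aws:s3:::my-destination-bucket',
    roleArn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
    processingConfiguration: {
      enabled: true,
      processors: [
        // ...the MetadataExtraction and AppendDelimiterToRecord processors from above
      ],
    },
  },
});
```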
What is the status of this feature in the L2 construct? The L1 construct (`CfnDeliveryStream`) supports this.
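Until the L2 construct exposes this natively, one workaround sketch is an escape hatch down to the L1 resource. This assumes the alpha L2 `DeliveryStream` sets the L1 resource as its default child and synthesizes an `ExtendedS3DestinationConfiguration`; the Glue database/table names and role ARN are placeholders:

```ts
import * as firehose from '@aws-cdk/aws-kinesisfirehose-alpha';
import { CfnDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';

declare const deliveryStream: firehose.DeliveryStream;

// Reach the underlying L1 resource (assumed to be the construct's default child)
// and override the property that the L2 construct does not yet expose.
const cfnDeliveryStream = deliveryStream.node.defaultChild as CfnDeliveryStream;
cfnDeliveryStream.addPropertyOverride(
  // Override paths use the CloudFormation (PascalCase) property names.
  'ExtendedS3DestinationConfiguration.DataFormatConversionConfiguration',
  {
    Enabled: true,
    InputFormatConfiguration: { Deserializer: { OpenXJsonSerDe: {} } },
    OutputFormatConfiguration: { Serializer: { ParquetSerDe: {} } },
    SchemaConfiguration: {
      DatabaseName: 'my_glue_database', // placeholder Glue database
      TableName: 'my_glue_table',       // placeholder Glue table
      RoleArn: 'arn:aws:iam::123456789012:role/firehose-delivery-role', // placeholder
    },
  },
);
```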
This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.
Allow record format conversion for Kinesis Data Firehose delivery streams as described here: https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html
Part of https://github.com/aws/aws-cdk/issues/7536
Use Case
Converting the format of input data for a delivery stream from JSON to Apache Parquet or Apache ORC.
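As a rough sketch of this use case with the L1 `CfnDeliveryStream` (all names and ARNs are placeholders, and an AWS Glue table describing the record schema is assumed to exist), JSON-to-Parquet conversion would look roughly like this:

```ts
import { Aws, Stack } from 'aws-cdk-lib';
import { CfnDeliveryStream } from 'aws-cdk-lib/aws-kinesisfirehose';

declare const stack: Stack;

// Sketch only: bucket/role ARNs and Glue database/table names are placeholders.
new CfnDeliveryStream(stack, 'ParquetDeliveryStream', {
  deliveryStreamType: 'DirectPut',
  extendedS3DestinationConfiguration: {
    bucketArn: 'arn:aws:s3:::my-destination-bucket',
    roleArn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
    // Firehose requires a buffer size of at least 64 MiB when format conversion is enabled.
    bufferingHints: { intervalInSeconds: 60, sizeInMBs: 64 },
    dataFormatConversionConfiguration: {
      enabled: true,
      // Incoming records are JSON...
      inputFormatConfiguration: { deserializer: { openXJsonSerDe: {} } },
      // ...and are written to S3 as Parquet (use orcSerDe for ORC instead).
      outputFormatConfiguration: { serializer: { parquetSerDe: {} } },
      // The schema comes from an existing AWS Glue table.
      schemaConfiguration: {
        databaseName: 'my_glue_database',
        tableName: 'my_glue_table',
        region: Aws.REGION,
        roleArn: 'arn:aws:iam::123456789012:role/firehose-delivery-role',
      },
    },
  },
});
```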
Proposed Solution
See RFC 340: Kinesis Firehose Delivery Stream
And this branch with the prototype
This is a :rocket: Feature Request