awslabs / landing-zone-accelerator-on-aws

Deploy a multi-account cloud foundation to support highly-regulated workloads and complex compliance requirements.
https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/
Apache License 2.0

How to update shard count for Kinesis Data Stream provisioned as part of LZA repo? #283

Open syanpriyajot opened 11 months ago

syanpriyajot commented 11 months ago

Describe the bug
How to update the shard count for the Kinesis Data Stream provisioned as part of the LZA repo?

To Reproduce
We want to increase the shard count for the Kinesis data stream provisioned as part of LZA. In logging-stack.ts, the shardCount for the Kinesis data stream created as part of the central logging stack is hard-coded:

```typescript
//
// Create Kinesis Data Stream
// Kinesis Stream - data stream which will get data from CloudWatch logs
const logsKinesisStreamCfn = new cdk.aws_kinesis.CfnStream(this, 'LogsKinesisStreamCfn', {
  retentionPeriodHours: 24,
  shardCount: 1,
  streamEncryption: {
    encryptionType: 'KMS',
    keyId: logsReplicationKmsKey.keyArn,
  },
});
const logsKinesisStream = cdk.aws_kinesis.Stream.fromStreamArn(
  this,
  'LogsKinesisStream',
  logsKinesisStreamCfn.attrArn,
);
```

What is the best approach to update the value of shardCount via IaC when using the LZA repo? Is it possible to update it via the config?
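For illustration, a minimal sketch of what we mean by updating it via IaC: in a local fork of logging-stack.ts, the hard-coded value could be replaced with a property passed into the stack. The property name centralLogStreamShardCount below is hypothetical, not an existing LZA config key.

```typescript
// Hypothetical sketch only: 'centralLogStreamShardCount' is not an existing LZA
// config option; it stands in for whatever property a fork would pass into the stack.
const logsKinesisStreamCfn = new cdk.aws_kinesis.CfnStream(this, 'LogsKinesisStreamCfn', {
  retentionPeriodHours: 24,
  shardCount: props.centralLogStreamShardCount ?? 1, // fall back to the current default of 1
  streamEncryption: {
    encryptionType: 'KMS',
    keyId: logsReplicationKmsKey.keyArn,
  },
});
```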

awsclemj commented 11 months ago

Hello, and thank you for reaching out to the Landing Zone Accelerator team!

Updating the shard count of the central logging Kinesis stream is not an available configuration option at this time. May I ask your use case for wanting to update the count? Are you encountering an error with centralized logging?

Thank you!

syanpriyajot commented 10 months ago

Hi Jimmy

Our use case is that we want to use the Kinesis Data Stream created as part of the LZA setup for streaming logs to Splunk Cloud. As part of LZA, the architecture for central logging is:

CloudWatch Log groups -> Kinesis Data Stream -> Kinesis Data Firehose-1 -> S3 bucket Destination

Since CloudWatch log groups have a limit of 2 subscription filters per log group, and because many components of the AWS-to-Splunk design are the same, we want to reuse some of the LZA infrastructure and stream data from the central log account. So we want to use the same Kinesis Data Stream with an increased number of shards rather than recreating everything from scratch:

CloudWatch Log groups -> Kinesis Data Stream -> Kinesis Data Firehose-2 -> HEC -> Splunk Cloud
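For illustration only, a rough CDK sketch of the second Firehose consumer we have in mind, attached to the existing stream. All construct names, IAM roles, the backup bucket, and the HEC endpoint/token below are placeholders, not values from LZA.

```typescript
// Hypothetical sketch: a second Firehose delivery stream reading from the
// LZA-provisioned Kinesis data stream and forwarding to a Splunk HEC endpoint.
// firehoseReadRole, firehoseSplunkRole, and splunkBackupBucket are placeholder
// resources that would need to be defined separately.
new cdk.aws_kinesisfirehose.CfnDeliveryStream(this, 'SplunkDeliveryStream', {
  deliveryStreamType: 'KinesisStreamAsSource',
  kinesisStreamSourceConfiguration: {
    kinesisStreamArn: logsKinesisStreamCfn.attrArn, // the existing LZA stream
    roleArn: firehoseReadRole.roleArn,              // role allowed to read from the stream
  },
  splunkDestinationConfiguration: {
    hecEndpoint: 'https://http-inputs-example.splunkcloud.com/services/collector', // placeholder
    hecEndpointType: 'Raw',
    hecToken: 'REPLACE_WITH_HEC_TOKEN',             // in practice, resolved from a secret
    s3Configuration: {
      bucketArn: splunkBackupBucket.bucketArn,      // backup bucket for failed events
      roleArn: firehoseSplunkRole.roleArn,
    },
  },
});
```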

awsclemj commented 10 months ago

Hello @syanpriyajot,

Setting the shard count outside of LZA is certainly an option; however, since that causes drift between your environment and the CloudFormation stack, the LZA team cannot guarantee that upgrades of the solution will not attempt to roll back the value. I can confirm that running any configuration updates in your current version will not overwrite the value, however.
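For reference, an out-of-band update could be made with the Kinesis UpdateShardCount API. Below is a rough sketch using the AWS SDK for JavaScript v3; the stream name, region, and target count are placeholders, and again, this introduces drift against the CloudFormation stack.

```typescript
// Rough sketch of an out-of-band shard count update via the Kinesis
// UpdateShardCount API. Stream name, region, and target count are placeholders.
import { KinesisClient, UpdateShardCountCommand } from '@aws-sdk/client-kinesis';

async function updateShardCount(): Promise<void> {
  const kinesis = new KinesisClient({ region: 'us-east-1' }); // your LZA home region
  await kinesis.send(
    new UpdateShardCountCommand({
      StreamName: 'REPLACE_WITH_STREAM_NAME', // the LZA-provisioned data stream
      TargetShardCount: 2,                    // example target value
      ScalingType: 'UNIFORM_SCALING',
    }),
  );
}
```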

I can confirm the LZA team is looking into updating the shard count of the solution-provisioned Kinesis stream to support additional scaling, but this may not necessarily be exposed as a configurable option.

Thank you, and please let us know if you have any follow-ups!

syanpriyajot commented 10 months ago

Hi @awsclemj

Thanks for your reply.

"I can confirm that running any configuration updates in your current version will not overwrite the value"

awsclemj commented 10 months ago

Hello, thank you for getting back to us.

Can you please elaborate on what exactly you mean by the above statement?

If you run updates through your pipeline without upgrading the version of the solution, this will not cause any changes to the Kinesis shard count. We cannot guarantee the shard count will not be rolled back after upgrading the solution.

I cannot provide answers to your remaining questions as the team still has not fully evaluated possible options for your use case. We will update this issue with more details should we come to a decision and/or have an ETA to provide you. Thanks again!