smuryginim opened 5 years ago
Hello everybody. I'm currently migrating to CloudWatch Logs and want to use this plugin to stream logs to ES. Everything works fine, but when messages are big (in my scenario > 256 KB), they are broken into parts. The relevant doc is here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CalculatePutEventsEntrySize.html
In my scenario, I publish logs in JSON format to CloudWatch with some additional metadata, like "app", "profile", etc.:
```xml
<appender name="json-output" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp/>
      <mdc/>
      <logLevel/>
      <loggerName/>
      <pattern>
        <pattern>
          { "profile": "${PROFILE}", "app": "${APPNAME}" }
        </pattern>
      </pattern>
      <threadName/>
      <message/>
    </providers>
  </encoder>
</appender>
```
In Logstash:

```
cloudwatch_logs {
  start_position   => "end"
  log_group        => ["/myGroupPrefix"]
  codec            => json
  log_group_prefix => "true"
}
```
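One idea I had for at least detecting the broken fragments: drop the `json` codec from the input (use `codec => plain`) and parse in a filter instead, so a truncated part gets tagged rather than indexed as a corrupt document. A minimal sketch (the tag is the json filter's default; the `broken_fragment` field is just a made-up marker of mine):

```
filter {
  # Parse JSON here instead of via the input codec, so truncated
  # fragments get tagged instead of emitted as corrupt documents.
  json {
    source         => "message"
    tag_on_failure => ["_jsonparsefailure"]   # the filter's default tag
  }
  if "_jsonparsefailure" in [tags] {
    # Mark the fragment so it can be routed away from ES for inspection.
    mutate { add_field => { "broken_fragment" => "true" } }
  }
}
```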
So when a message is broken into parts, CloudWatch produces broken JSON. Has anyone handled such cases with the CloudWatch plugin?
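The only stitching idea I've come up with so far, and I haven't verified it, is buffering consecutive failed fragments per log stream with the aggregate filter and retrying the parse. It assumes the input adds a `[cloudwatch_logs][log_stream]` field, that fragments arrive in order (so `pipeline.workers: 1`), and that `LogStash::Json` is available in the Ruby code block; `stitched` is a hypothetical marker field:

```
filter {
  if "_jsonparsefailure" in [tags] {
    aggregate {
      task_id => "%{[cloudwatch_logs][log_stream]}"
      code => '
        map["buf"] ||= ""
        map["buf"] += event.get("message").to_s
        begin
          LogStash::Json.load(map["buf"])      # does the buffer parse again?
          event.set("message", map["buf"])     # yes: carry the stitched JSON
          event.set("stitched", true)
          map["buf"] = ""
        rescue => e
          event.cancel                         # still a partial fragment
        end
      '
      timeout => 60   # give up on a stream's buffer after a minute
    }
  }
  if [stitched] {
    # Re-parse the stitched message and clear the failure markers.
    json { source => "message" }
    mutate {
      remove_field => ["stitched"]
      remove_tag   => ["_jsonparsefailure"]
    }
  }
}
```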