Open dzilbermanvmw opened 1 year ago
Hi Daniel,
Will look into this. Thank you for the details you have shared.
Hi,
Use this config for Fluent Bit and it will parse the log lines correctly as JSON into CloudWatch Logs (I'm using the aws-for-fluent-bit Helm chart):
```yaml
# Grep Filter drops logs that are only whitespace.
additionalFilters: |
  [FILTER]
      Name      grep
      Match     *
      Regex     $log (.|\s)*\S(.|\s)*

  [FILTER]
      Name      parser
      Match     *
      Key_name  log
      Parser    falco

  [FILTER]
      Name              aws
      Match             *
      imds_version      v1
      az                true
      ec2_instance_id   true
      ec2_instance_type true
      private_ip        true
      ami_id            true
      account_id        true
      hostname          true
      vpc_id            true

additionalInputs: |
  [INPUT]
      Name              tail
      Tag               falco.*
      Path              /var/log/containers/falco*.log
      DB                /var/log/flb_falco.db
      Mem_Buf_Limit     5MB
      Skip_Long_Lines   On
      Refresh_Interval  10

additionalOutputs: |
  [OUTPUT]
      Name              cloudwatch
      Match             falco.**
      region            eu-west-2
      log_group_name    falco
      log_stream_name   alerts
      auto_create_group true

service:
  extraParsers: |
    [PARSER]
        Name        falco
        Format      Regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   Off
        # Command       | Decoder | Field | Optional Action
        # ==============|=========|=======|=================
        Decode_Field_As   json      log
```
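To illustrate what the `falco` parser plus `Decode_Field_As json log` do to a containerd-formatted container log line, here is a small Python sketch (my own illustration, not part of the chart) that applies the same regex and then decodes the captured `log` field as JSON:

```python
import json
import re

# Same pattern as the [PARSER] Regex above; Fluent Bit's named groups
# (?<name>...) map to Python's (?P<name>...).
PARSER_RE = re.compile(
    r"^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>P|F) (?P<log>.*)$"
)

# A containerd-style line as it appears in /var/log/containers/falco*.log
# (shortened Falco payload for readability).
line = ('2023-02-02T01:49:29.750728784Z stdout F '
        '{"rule": "Write below etc", "priority": "Error"}')

m = PARSER_RE.match(line)
record = m.groupdict()               # keys: time, stream, logtag, log
decoded = json.loads(record["log"])  # what Decode_Field_As json log does

print(decoded["rule"])      # Write below etc
print(decoded["priority"])  # Error
```

The key point is that without the `Decode_Field_As json log` step, the `log` group would remain a plain string and the Falco fields inside it would not be individually addressable.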
I have deployed the container runtime monitoring solution based on this blog: https://aws.amazon.com/blogs/security/continuous-runtime-security-monitoring-with-aws-security-hub-and-falco/ I updated the FireLens/Fluent Bit deployment following the referenced blog: https://aws.amazon.com/blogs/containers/implementing-runtime-security-in-amazon-eks-using-cncf-falco/ then updated the aws-for-fluent-bit image to `:latest` and added the `[FILTER]` settings for Fluent Bit per: https://docs.fluentbit.io/manual/pipeline/filters/aws-metadata
Resulting log entries sent to CloudWatch have the following format:

```json
{
  "account_id": "133776528597",
  "ami_id": "ami-0d8857ce76f65c24d",
  "az": "us-west-1c",
  "ec2_instance_id": "i-0a1624d2e71fb1654",
  "ec2_instance_type": "m5.2xlarge",
  "hostname": "ip-10-0-11-242.us-west-1.compute.internal",
  "log": "2023-02-02T01:49:29.750728784Z stdout F {\"hostname\":\"falco-z95gm\",\"output\":\"01:49:26.707737550: Error File below /etc opened for writing (user= user_loginuid=-1 command=touch /etc/53 pid=1864 parent=bash pcmdline=bash file=/etc/53 program=touch gparent= ggparent= gggparent= container_id=f687a1640776 image=docker.io/library/nginx) k8s.ns=default k8s.pod=nginx-test2 container=f687a1640776\",\"priority\":\"Error\",\"rule\":\"Write below etc\",\"source\":\"syscall\",\"tags\":[\"filesystem\",\"mitre_persistence\"],\"time\":\"2023-02-02T01:49:26.707737550Z\", \"output_fields\": {\"container.id\":\"f687a1640776\",\"container.image.repository\":\"docker.io/library/nginx\",\"evt.time\":1675302566707737550,\"fd.name\":\"/etc/53\",\"k8s.ns.name\":\"default\",\"k8s.pod.name\":\"nginx-test2\",\"proc.aname[2]\":null,\"proc.aname[3]\":null,\"proc.aname[4]\":null,\"proc.cmdline\":\"touch /etc/53\",\"proc.name\":\"touch\",\"proc.pcmdline\":\"bash\",\"proc.pid\":1864,\"proc.pname\":\"bash\",\"user.loginuid\":-1,\"user.name\":\"\"}}",
  "private_ip": "10.0.11.242",
  "vpc_id": "vpc-069155af741792b14"
}
```
The Lambda function 'AwsSecurityhubFalcoEcsEksIn-lambdafunction', which is responsible for parsing those logs and generating ASFF-formatted messages, fails to parse the "nested" JSON fields such as "output", "priority" and "output_fields" using syntax like `logentry["log"][""]`. It looks like the value of the "log" string needs to be cleaned up and parsed so its nested fields can be accessed. Please have a look, thank you!
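For what it's worth, here is a minimal Python sketch of the clean-up step being described: stripping the containerd `<timestamp> <stream> <tag> ` prefix from the "log" string and parsing the remainder as JSON so the nested fields become addressable. The function name and the shortened record are my own illustration, not the actual Lambda's code:

```python
import json

# Simplified CloudWatch record; "log" still carries the containerd prefix
# ("<timestamp> stdout F ") in front of the Falco JSON payload.
logentry = {
    "log": '2023-02-02T01:49:29.750728784Z stdout F '
           '{"priority": "Error", "rule": "Write below etc", '
           '"output_fields": {"proc.name": "touch"}}'
}

def parse_falco_log(raw: str) -> dict:
    """Drop the containerd '<time> <stream> <tag> ' prefix, then parse the JSON."""
    # Everything from the first '{' onward is the Falco alert payload.
    payload = raw[raw.index("{"):]
    return json.loads(payload)

alert = parse_falco_log(logentry["log"])
print(alert["priority"])                    # Error
print(alert["output_fields"]["proc.name"])  # touch
```

With the Fluent Bit `Decode_Field_As json log` decoder in place (as in the config above), this prefix-stripping would already happen at ingestion time and the Lambda could index straight into the decoded fields.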