Closed: kratos81 closed this issue 6 years ago
Hey @kratos81
It looks like Falco is having trouble loading rules from /etc/falco/falco.yaml. I'm going to try to reproduce the error using the Helm chart, so sharing your complete values.yaml file would help me debug the issue.
Thanks!
Hi @nestorsalceda, thanks for your speedy response. Find the information below:
```yaml
image:
  repository: sysdig/falco
  tag: dev
  pullPolicy: Always

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 200m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 256Mi

rbac:
  # Create and use rbac resources
  create: true

serviceAccount:
  # Create and use serviceAccount resources
  create: true
  # Use this value as serviceAccountName
  name: falco

fakeEventGenerator:
  enabled: false
  replicas: 1

daemonset: {}
# Allow the DaemonSet to perform a rolling update on helm update
# ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
# If you do want to specify resources, uncomment the following lines, adjust
# them as necessary, and remove the curly braces after 'resources:'.
# updateStrategy: RollingUpdate

falco:
  # The location of the rules file(s). This can contain one or more paths to
  # separate rules files.
  rulesFile:
    - /etc/falco/falco_rules.yaml
    - /etc/falco/falco_rules.local.yaml
    - /etc/falco/rules.d

  # Whether to output events in json or text
  jsonOutput: true

  # When using json output, whether or not to include the "output" property
  # itself (e.g. "File below a known binary directory opened for writing
  # (user=root ....") in the json output.
  jsonIncludeOutputProperty: true

  # Send information logs to stderr and/or syslog. Note these are *not* security
  # notification logs! These are just Falco lifecycle (and possibly error) logs.
  logStderr: true
  logSyslog: true

  # Minimum log level to include in logs. Note: these levels are
  # separate from the priority field of rules. This refers only to the
  # log level of falco's internal logging. Can be one of "emergency",
  # "alert", "critical", "error", "warning", "notice", "info", "debug".
  logLevel: info

  # Minimum rule priority level to load and run. All rules having a
  # priority more severe than this level will be loaded/run. Can be one
  # of "emergency", "alert", "critical", "error", "warning", "notice",
  # "info", "debug".
  priority: debug

  # Whether or not output to any of the output channels below is
  # buffered.
  bufferedOutputs: false

  # A throttling mechanism implemented as a token bucket limits the
  # rate of falco notifications. This throttling is controlled by the
  # following configuration options:
  # - rate: the number of tokens (i.e. right to send a notification)
  #   gained per second. Defaults to 1.
  # - max_burst: the maximum number of tokens outstanding. Defaults to 1000.
  #
  # With these defaults, falco could send up to 1000 notifications after
  # an initial quiet period, and then up to 1 notification per second
  # afterward. It would gain the full burst back after 1000 seconds of
  # no activity.
  outputs:
    rate: 1
    maxBurst: 1000

  # Where security notifications should go.
  # Multiple outputs can be enabled.
  syslogOutput:
    enabled: true

  # If keep_alive is set to true, the file will be opened once and
  # continuously written to, with each output message on its own
  # line. If keep_alive is set to false, the file will be re-opened
  # for each output message.
  #
  # Also, the file will be closed and reopened if falco is signaled with
  # SIGUSR1.
  fileOutput:
    enabled: false
    keepAlive: false
    filename: ./events.txt

  stdoutOutput:
    enabled: true

  # Possible additional things you might want to do with program output:
  #   - send to a slack webhook:
  #     program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
  #   - logging (alternate method than syslog):
  #     program: logger -t falco-test
  #   - send over a network connection:
  #     program: nc host.example.com 80
  # If keep_alive is set to true, the program will be started once and
  # continuously written to, with each output message on its own
  # line. If keep_alive is set to false, the program will be re-spawned
  # for each output message.
  #
  # Also, the program will be closed and reopened if falco is signaled with
  # SIGUSR1.
  programOutput:
    enabled: true
    keepAlive: false
    program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXcustomLinkXXX/xxxxxxx/“

customRules:
  # Although Falco comes with a nice default rule set for detecting weird
  # behavior in containers, our users are going to customize the run-time
  # security rule sets or policies for the specific container images and
  # applications they run. This feature can be handled in this section.
  #
  # Example:
  #
  # rules-traefik.yaml: |-
  #   [ rule body ]

integrations:
  #json_output: true
  #program_output:
  #  enabled: true
  #  keep_alive: false
  #  program: "jq '{text: .output}' | curl -d @- -X POST "https://hooks.slack.com/services/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"
  gcscc:
    enabled: false
    webhookUrl: http://sysdig-gcscc-connector.default.svc.cluster.local:8080/events
    webhookAuthenticationToken: b27511f86e911f20b9e0f9c8104b4ec4
  # If Nats Output integration is enabled, falco will be configured to use this
  # integration as file_output and sets the following values:
  # * json_output: true
  # * json_include_output_property: true
  # * file_output:
  #     enabled: true
  #     keep_alive: true
  #     filename: /var/run/falco/nats
  natsOutput:
    enabled: false
    natsUrl: "nats://nats.nats-io.svc.cluster.local:4222"

# Allow falco to run on Kubernetes 1.6 masters.
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
```
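For reference, the token-bucket throttling described in the `outputs` comments above (the `rate` and `maxBurst` options) can be sketched roughly as follows. This is a simplified illustrative model, not Falco's actual implementation; the class and variable names are made up:

```python
import time

class TokenBucket:
    """Simplified sketch of Falco's notification throttling (illustrative only)."""

    def __init__(self, rate=1.0, max_burst=1000):
        self.rate = rate              # tokens (notification rights) gained per second
        self.max_burst = max_burst    # cap on outstanding tokens
        self.tokens = max_burst       # start with a full bucket (initial quiet period)
        self.last = time.monotonic()

    def allow(self):
        """Return True if a notification may be sent now."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at max_burst.
        self.tokens = min(self.max_burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a small bucket, a burst of 10 attempts only lets the first 5 through;
# afterward notifications pass at roughly `rate` per second.
bucket = TokenBucket(rate=1, max_burst=5)
print(sum(bucket.allow() for _ in range(10)))  # 5
```

With the chart's defaults (`rate: 1`, `maxBurst: 1000`), the bucket is full after a quiet period, allowing up to 1000 back-to-back notifications and then one per second.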
Let me know if you need any further information. I only redacted the webhook URL.
Got it!
```yaml
program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXcustomLinkXXX/xxxxxxx/“
```
The problem is that the quotation marks are not escaped (note also the trailing curly quote `“` where a straight `"` belongs); the correct value is:

```yaml
program: "\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXcustomLinkXXX/xxxxxxx/\""
```

It's a bit tricky: the value has to survive both the YAML of the Kubernetes ConfigMap and the quoting that ends up in the falco.yaml file.
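A quick way to see what the extra escaping buys you (a standalone sketch; the real rendering happens inside the chart's templates, and the webhook URL below is the redacted placeholder from above):

```python
# Sketch: what the two quoting layers do (assumption: a simplified stand-in
# for the chart's YAML handling, not the actual Helm templating code).

# As written in values.yaml, the scalar carries escaped inner quotes:
escaped = "\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXcustomLinkXXX/xxxxxxx/\""

# After the first YAML layer consumes the outer quotes, the value itself
# still begins and ends with a literal double quote...
assert escaped.startswith('"') and escaped.endswith('"')

# ...which is what lands in /etc/falco/falco.yaml, where Falco reads the
# program as a single properly quoted shell command:
print("program: " + escaped)
```

Without the `\"` escapes, the outer quotes are consumed by the first parse and the quoting is gone by the time the value reaches falco.yaml.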
And finally, I have shortened your configuration file:
```yaml
resources:
  limits:
    cpu: 200m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 256Mi

falco:
  jsonOutput: false
  programOutput:
    enabled: true
    keepAlive: false
    program: "\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXcustomLinkXXX/xxxxxxx/\""
```
Also notice that I'm using the latest sysdig/falco image instead of the dev one. Right now, latest is newer than dev and is working fine.
So, with this configuration I think you can continue with your POC. If you hit more issues, I can help you here, or in our Public Slack if you want to share some thoughts.
OK, thanks. I'll try that and let you know how it goes.
Although I should mention that I was using the latest image and getting the same error; while researching, I found some forums mentioning that the dev image solved the issue, which prompted me to change the image.
However, I'll make the changes and post the results. Thanks!
Hey @kratos81 !
Hope everything is going well. Did you make some progress on this issue?
We are going to close this issue but if you find a bug, please reopen it.
Thanks, and don't forget that we are available in our Public Slack if you need more help.
Thanks!
@kratos81 we are closing this issue, if you need further help let us know on Slack!
I have been trying to install Falco as a POC, but it keeps erroring out. I am using the Helm chart to deploy it with the default rules. We have a GKE cluster with a combination of CoreOS and Ubuntu node pools; however, I set the taints and tolerations such that pods from the DaemonSet can ONLY be deployed on the Ubuntu nodes.
This is the error I'm getting:
That was the error I was getting. I had seen some posts mentioning that using the dev image would resolve the issue; I tried that and it did not work.
These are snippets of my values.yaml file:
We are running Kubernetes 1.10.5; the Helm version is:

```
Client: &version.Version{SemVer:"v2.9.1", …}
Server: &version.Version{SemVer:"v2.9.1", …}
```