Hi
Thank you for reporting this! I'm linking the original discussion on Slack as a reference:
Moreover, it seems to me this could be an issue related to how falco-exporter is deployed by its Helm chart. If that is confirmed, I will move this issue to the charts repository.
Finally, just a question: instead of manually creating the ServiceMonitor, have you tried using the chart's option intended for that (i.e., serviceMonitor.enabled)?
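For reference, turning that option on at install/upgrade time would look roughly like this (the release name and chart repo URL are my assumptions, not taken from this thread):

```sh
# Add the falcosecurity chart repo (skip if already added)
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Let the chart create the ServiceMonitor itself instead of hand-writing one
helm upgrade --install falco-exporter falcosecurity/falco-exporter \
  --set serviceMonitor.enabled=true
```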
Thanks
Thanks @leogr for your quick reply. I tried to redeploy falco-exporter with the --set serviceMonitor.enabled=true switch, with no luck. I also tried to apply a custom ServiceMonitor config, again with no luck:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: falco-exporter
  labels:
    release: prometheus
    app: falco-exporter
spec:
  endpoints:
    - port: '9376'
      path: '/metrics'
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: falco-exporter
      release: prometheus
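One thing that stands out in the manifest above (an observation on my side, not necessarily the fix discussed on Slack): in a ServiceMonitor, endpoints[].port refers to the name of a port on the target Service, not its number. A sketch of how that might look, assuming the falco-exporter Service names its port metrics (verify with kubectl get svc falco-exporter -o yaml):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: falco-exporter
  labels:
    release: prometheus        # must match the Prometheus serviceMonitorSelector
spec:
  endpoints:
    - port: metrics            # assumed Service port *name*, not the number '9376'
      path: /metrics
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: falco-exporter      # must match the labels on the falco-exporter Service
```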
I will destroy the cluster and start over.
Hi @leogr , I rebuilt the cluster following the steps verbatim and don't see falco-exporter under targets and service discovery in prom on http://127.0.0.1:9090. I'm not sure how to further troubleshoot at the moment. Trying to get help from folks with depth on Prom.
Can you confirm this has been solved with the solution described by this comment https://kubernetes.slack.com/archives/CMWH3EH32/p1619753500281500?thread_ts=1619144955.110600&cid=CMWH3EH32 ?
It seems to me this issue belongs to the falco-exporter repository. Moving it there.
Assuming this issue has been solved as per this discussion :point_down: https://kubernetes.slack.com/archives/CMWH3EH32/p1619753500281500?thread_ts=1619144955.110600&cid=CMWH3EH32
/close
@leogr: Closing this issue.
Is it possible to publish the solution here? I have no access to this Slack channel. Looks like they restricted it.
Me too, I have no access to this Slack channel. Is it possible to publish the solution here?
Hey @Neneil94 and @BastienBNG
Unfortunately, that discussion is very long and difficult to share here. Anyhow, the Slack channel is not restricted. To access the discussion, you just have to join the #falco channel :point_right: https://kubernetes.slack.com/messages/falco
I hope I've been of some help :)
Describe the bug
I've partnered with Udacity to create a course on microservices security. Falco is front and center per the good work of our community :wink:
I'm working through demos for the very last lesson in the course, on runtime monitoring and incident response. The intent is to teach students how to use Falco for runtime monitoring and incident response. I'm roughly following Leo's awesome blog: https://falco.org/blog/falco-kind-prometheus-grafana/#install-prometheus
After much debugging, I'm not seeing the Falco metrics in Prometheus, and consequently not in Grafana either. I'm under a huge time crunch: we need to ship the course, and this is the last technical blocker. I would greatly appreciate your help, folks. I'm hard blocked on completion and have already created a lot of content for this.
How to reproduce
To repro:
Create a two-node RKE cluster (node1 and node2) via Vagrantfile and cluster.yaml
SSH into node1 and node2 and install kernel drivers for falco
rpm --import https://falco.org/repo/falcosecurity-3672BA8F.asc
curl -s -o /etc/zypp/repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo
Install kernel headers: zypper -n install kernel-default-devel
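The repo setup above stops short of installing the Falco package itself; on openSUSE that would presumably be a single zypper command (an assumption based on Falco's install docs, not something stated in this report):

```sh
# Assumed follow-up to the repo setup above (per Falco's openSUSE install docs)
zypper -n install falco
```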
certs/ directory on the machine I'm running helm from
Install kube-prometheus-stack: helm install prometheus prometheus-community/kube-prometheus-stack
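For completeness, that install assumes the prometheus-community Helm repo is already configured; roughly (the repo URL is the community's standard one, not quoted from this report):

```sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
```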
Port-forward all relevant pods:
kubectl --namespace default port-forward falco-exporter-jq869 9376
kubectl --namespace default port-forward prometheus-grafana-66c946f558-7j9hq 3000
kubectl --namespace default port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090
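With the falco-exporter port-forward in place, one quick sanity check (my addition, not part of the original repro) is to hit the exporter directly and confirm it exposes the metric at all:

```sh
# falco-exporter serves Prometheus metrics on 9376 at /metrics
curl -s http://localhost:9376/metrics | grep falco_events
```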
Create the prometheus-additional.yaml file, then generate additional-scrape-configs.yaml from it:
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml > additional-scrape-configs.yaml
Apply the additional-scrape-configs.yaml via kubectl apply -f additional-scrape-configs.yaml
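For context, the additional scrape config in this flow points Prometheus straight at the falco-exporter Service; a minimal prometheus-additional.yaml would look roughly like this (job name, namespace, and port are assumptions based on a default-namespace install). Note that the Secret manifest is usually generated with kubectl create secret ... --dry-run=client -o yaml so that the redirected file is actually an applyable manifest:

```yaml
# prometheus-additional.yaml (sketch; adjust namespace and port to your setup)
- job_name: falco-exporter
  static_configs:
    - targets: ["falco-exporter.default.svc.cluster.local:9376"]
```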
Create a custom ServiceMonitor file falco_service_monitor.yaml and apply it via kubectl apply -f falco_service_monitor.yaml
What I see: the Grafana dashboard is empty. The falco-exporter ServiceMonitor populates in Prometheus, but there are no metrics: Prometheus service discovery is OK for falco-exporter, yet the Prometheus metric source shows no falco_events.
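Not part of the original report, but a quick way to see where the chain breaks is to query Prometheus directly over the 9090 port-forward:

```sh
# Is there an active falco-exporter target, and is it healthy?
curl -s http://localhost:9090/api/v1/targets | grep -o 'falco[^"]*'

# Has Prometheus ingested the metric at all?
curl -s 'http://localhost:9090/api/v1/query?query=falco_events'
```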
Expected behaviour
Expect to see falco_events in Prometheus and in Grafana.
Screenshots
(Screenshots: falco-exporter, Prometheus service discovery, Prometheus metric source, Grafana dashboard.)
Environment
RKE
docker.io/falcosecurity/falco:0.28.0
Vagrant box running openSUSE Leap, hosted on macOS Catalina
NAME="openSUSE Leap" VERSION="15.2" ID="opensuse-leap" ID_LIKE="suse opensuse" VERSION_ID="15.2" PRETTY_NAME="openSUSE Leap 15.2" ANSI_COLOR="0;32" CPE_NAME="cpe:/o:opensuse:leap:15.2" BUG_REPORT_URL="https://bugs.opensuse.org" HOME_URL="https://www.opensuse.org/"
Linux localhost 5.3.18-lp152.72-default #1 SMP Wed Apr 14 10:13:15 UTC 2021 (013936d) x86_64 x86_64 x86_64 GNU/Linux
See above
Additional context