I also do not see any Exposed Secret Reports. I enabled them in the adapter and restarted it, the same way I did for the Compliance Reports, and Trivy produces plenty of ExposedSecretReports. But I only see Vulnerability Reports ...
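To be concrete, "enabled in the adapter" refers to this fragment of the adapter's config (the full live ConfigMap is posted further below):

```yaml
# Fragment of the adapter's config.yaml, taken from the ConfigMap shown below
exposedSecretReports:
  enabled: true   # enabled, followed by a restart of the adapter
  timeout: 2
```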
are your compliance reports in detailed mode?
https://github.com/fjogeleit/trivy-operator-polr-adapter#clustercompliancereport
Related to the ExposedSecretReports, do you see any logs in the adapter? Can you share your configuration?
> are your compliance reports in detailed mode?
> https://github.com/fjogeleit/trivy-operator-polr-adapter#clustercompliancereport
Hi @fjogeleit. I don't completely understand. I posted examples of both a ClusterComplianceReport and a ClusterPolicyReport above. Is there anything I have to do regarding this 'detailed mode'?
> Related to the ExposedSecretReports, do you see any logs in the adapter? Can you share your configuration?
First, here's what I see regarding the available reports from Trivy:
```
$ k get exposedsecretreports.aquasecurity.github.io
NAME                                                  REPOSITORY                                      TAG       SCANNER   AGE
daemonset-84647b4fc5                                  prometheus/node-exporter                        v1.7.0    Trivy     16m
daemonset-grafana-loki-logs-config-reloader           prometheus-operator/prometheus-config-reloader  v0.67.1   Trivy     9m51s
daemonset-grafana-loki-logs-grafana-agent             grafana/agent                                   v0.39.1   Trivy     9m51s
daemonset-grafana-promtail-promtail                   grafana/promtail                                2.9.3     Trivy     15m
daemonset-loki-canary-loki-canary                     grafana/loki-canary                             2.9.6     Trivy     8m18s
replicaset-metrics-server-857cf459f4-metrics-server   metrics-server/metrics-server                   v0.6.4    Trivy     4m50s
statefulset-5cc9db4664                                prometheus-operator/prometheus-config-reloader  v0.72.0   Trivy     8m12s
statefulset-5dfc79bcfb                                prometheus/prometheus                           v2.50.1   Trivy     16m
statefulset-5fccc75856                                prometheus-operator/prometheus-config-reloader  v0.72.0   Trivy     16m
statefulset-5fcdb95887                                prometheus-operator/prometheus-config-reloader  v0.72.0   Trivy     16m
statefulset-69875b765c                                prometheus/alertmanager                         v0.27.0   Trivy     8m12s
statefulset-7bbcdc66ff                                prometheus-operator/prometheus-config-reloader  v0.72.0   Trivy     8m12s
statefulset-loki-backend-loki                         grafana/loki                                    2.9.6     Trivy     2m42s
statefulset-loki-write-loki                           grafana/loki                                    2.9.6     Trivy     16m
```
Here is the active config:
```yaml
apiVersion: v1
data:
  config.yaml: |
    server:
      port: 8080
    vulnerabilityReports:
      enabled: true
      timeout: 2
    configAuditReports:
      enabled: false
      timeout: 2
    cisKubeBenchReports:
      enabled: false
      timeout: 2
    complianceReports:
      enabled: true
      timeout: 2
    rbacAssessmentReports:
      enabled: false
      timeout: 2
    exposedSecretReports:
      enabled: true
      timeout: 2
    infraAssessmentReports:
      enabled: false
      timeout: 2
    clusterInfraAssessmentReports:
      enabled: false
      timeout: 2
    clusterVulnerabilityReports:
      enabled: true
      timeout: 2
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: trivy-operator-polr-adapter
    meta.helm.sh/release-namespace: trivy-system
  creationTimestamp: "2024-06-12T13:00:39Z"
  labels:
    app.kubernetes.io/instance: trivy-operator-polr-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: trivy-operator-polr-adapter
    app.kubernetes.io/version: 0.8.0
    helm.sh/chart: trivy-operator-polr-adapter-0.8.0
  name: trivy-operator-polr-adapter-config
  namespace: trivy-system
  resourceVersion: "5703679"
  uid: 6f97c694-3962-42aa-a613-41daa0de6e4f
```
And here is the log from the Adapter Pod:
```
[INFO] start server on port 8080
[INFO] VulnerabilityReports enabled
[INFO] ClusterVulnerabilityReports enabled
[INFO] ComplianceReports enabled
[INFO] ExposedSecretReports enabled
I0612 13:19:23.476177 1 controller.go:178] "msg"="Starting EventSource" "controller"="vulnerability" "source"="kind source: *v1alpha1.VulnerabilityReport"
I0612 13:19:23.476261 1 controller.go:186] "msg"="Starting Controller" "controller"="vulnerability"
I0612 13:19:23.476678 1 controller.go:178] "msg"="Starting EventSource" "controller"="compliance" "source"="kind source: *v1alpha1.ClusterComplianceReport"
I0612 13:19:23.476705 1 controller.go:186] "msg"="Starting Controller" "controller"="compliance"
I0612 13:19:23.476823 1 controller.go:178] "msg"="Starting EventSource" "controller"="clustervulnerability" "source"="kind source: *v1alpha1.ClusterVulnerabilityReport"
I0612 13:19:23.476835 1 controller.go:186] "msg"="Starting Controller" "controller"="clustervulnerability"
I0612 13:19:23.477017 1 controller.go:178] "msg"="Starting EventSource" "controller"="exposedsecret" "source"="kind source: *v1alpha1.ExposedSecretReport"
I0612 13:19:23.478994 1 controller.go:186] "msg"="Starting Controller" "controller"="exposedsecret"
I0612 13:19:23.796665 1 controller.go:220] "msg"="Starting workers" "controller"="vulnerability" "worker count"=1
I0612 13:19:23.799361 1 controller.go:220] "msg"="Starting workers" "controller"="clustervulnerability" "worker count"=1
I0612 13:19:23.799447 1 controller.go:220] "msg"="Starting workers" "controller"="exposedsecret" "worker count"=1
I0612 13:19:23.799511 1 controller.go:220] "msg"="Starting workers" "controller"="compliance" "worker count"=1
2024/06/12 13:19:32 [ERROR] failed to get PolicyReport CRD, ensure it is installed
```
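(Side note on the `failed to get PolicyReport CRD` error above — a quick way to double-check that both Policy Reporter CRDs are installed, in case that error persists:)

```sh
# Both CRDs belong to the wgpolicyk8s.io group; the adapter needs them
# to create PolicyReports (polr) and ClusterPolicyReports (cpolr)
kubectl get crd policyreports.wgpolicyk8s.io clusterpolicyreports.wgpolicyk8s.io
```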
Finally, here is what I see when I run `kubectl get polr -A -l app.kubernetes.io/created-by=trivy-operator-polr-adapter` (the numeric columns are the PASS, FAIL, WARN, ERROR and SKIP counts, followed by AGE):
```
cert-manager trivy-vuln-polr-replicaset-cert-manager-57688f5dc6 0 2 11 0 2 20m
default trivy-vuln-polr-daemonset-grafana-loki-logs 0 4 17 0 2 17m
default trivy-vuln-polr-daemonset-grafana-promtail 0 38 138 0 2 20m
default trivy-vuln-polr-daemonset-loki-canary 0 2 15 0 2 15m
default trivy-vuln-polr-daemonset-prometheus-grafana-prometheus-node-exporter 0 1 9 0 2 20m
default trivy-vuln-polr-pod-cluster-example-full-1 0 8 34 0 4 32s
default trivy-vuln-polr-replicaset-external-dns-64cdbfdbc5 0 3 9 0 2 2m25s
default trivy-vuln-polr-replicaset-metrics-server-857cf459f4 0 8 16 0 2 12m
default trivy-vuln-polr-statefulset-alertmanager-prometheus-grafana-kube-pr-alertmanager 0 2 14 0 4 15m
default trivy-vuln-polr-statefulset-loki-backend 0 2 16 0 2 10m
default trivy-vuln-polr-statefulset-loki-write 0 2 16 0 2 20m
default trivy-vuln-polr-statefulset-prometheus-prometheus-grafana-kube-pr-prometheus 0 2 8 0 2 20m
ingress-nginx trivy-vuln-polr-daemonset-ingress-nginx-controller 0 6 31 0 6 14m
kube-system trivy-vuln-polr-daemonset-cilium 0 2 68 0 22 19m
kube-system trivy-vuln-polr-pod-etcd-master0-k8s.lan 0 5 33 0 10 5m55s
kube-system trivy-vuln-polr-pod-etcd-master1-k8s.lan 0 5 33 0 10 7m3s
kube-system trivy-vuln-polr-pod-etcd-master2-k8s.lan 0 5 33 0 10 10m
kube-system trivy-vuln-polr-pod-kube-apiserver-master0-k8s.lan 0 3 1 0 4 19m
kube-system trivy-vuln-polr-pod-kube-apiserver-master1-k8s.lan 0 3 1 0 4 19m
kube-system trivy-vuln-polr-pod-kube-apiserver-master2-k8s.lan 0 3 1 0 4 5m59s
kube-system trivy-vuln-polr-pod-kube-controller-manager-master0-k8s.lan 0 3 1 0 4 12m
kube-system trivy-vuln-polr-pod-kube-controller-manager-master1-k8s.lan 0 3 1 0 4 15m
kube-system trivy-vuln-polr-pod-kube-controller-manager-master2-k8s.lan 0 3 1 0 4 12m
kube-system trivy-vuln-polr-pod-kube-scheduler-master0-k8s.lan 0 3 0 0 4 7m12s
kube-system trivy-vuln-polr-pod-kube-scheduler-master1-k8s.lan 0 3 0 0 4 20m
kube-system trivy-vuln-polr-pod-kube-scheduler-master2-k8s.lan 0 3 0 0 4 10m
kube-system trivy-vuln-polr-pod-kube-vip-master0-k8s.lan 0 2 7 0 2 20m
kube-system trivy-vuln-polr-pod-kube-vip-master1-k8s.lan 0 2 7 0 2 4m46s
kube-system trivy-vuln-polr-pod-kube-vip-master2-k8s.lan 0 2 7 0 2 15m
kube-system trivy-vuln-polr-replicaset-cilium-operator-5bb69f47c8 0 0 2 0 4 5m16s
kube-system trivy-vuln-polr-replicaset-otel-collector-deployment-collector-dcc74f9db 0 4 1 0 2 10m
observability trivy-vuln-polr-cronjob-jaeger-es-index-cleaner 0 2 2 0 2 20m
observability trivy-vuln-polr-daemonset-jaeger-agent-daemonset 0 2 3 0 2 20m
observability trivy-vuln-polr-replicaset-open-telemetry-opentelemetry-operator-549b79d585 0 5 12 0 2 59s
observability trivy-vuln-polr-replicaset-opensearch-operator-controller-manager-659ddd6d5b 0 3 4 0 2 10m
olm trivy-vuln-polr-job-235dfa22797fca1d30bcfc76fc0fc2df93095fa2b43c123c733eb2f6d794db4 0 38 27 0 4 61s
olm trivy-vuln-polr-job-8005f8834ce6458be6a7fc814d6445b4ef7b29d1d2b5c16b423e07f27411cb4 0 38 27 0 4 20m
olm trivy-vuln-polr-job-8cf94d67b11aba394d352d3b633b8dcf017203748064e0b05891efaa5f26483 0 38 27 0 4 3m52s
olm trivy-vuln-polr-job-90f473a33ad2ab5e82be941fa529252c1731dfaf367bd29572c7050021eb5f1 0 38 27 0 4 3m7s
olm trivy-vuln-polr-job-c98bf5c376f0259eec8ca55be6552819c188a0c3a56a23a1da84be97131b6c5 0 38 27 0 4 20m
olm trivy-vuln-polr-job-cf31fb0d43efb65b10ca65cc2ffdfbd62f7700974502e0cfa7afa0c8a4733f4 0 38 27 0 4 2m20s
operators trivy-vuln-polr-replicaset-cnpg-controller-manager-5b58dffbdc 0 8 34 0 4 7m11s
policy-reporter trivy-vuln-polr-replicaset-policy-reporter-5dd7bdd8b4 0 0 0 0 2 19m
policy-reporter trivy-vuln-polr-replicaset-policy-reporter-ui-5c565766d 0 1 7 0 2 15m
rook-ceph trivy-vuln-polr-daemonset-csi-cephfsplugin 0 1 7 0 2 8m7s
rook-ceph trivy-vuln-polr-daemonset-csi-rbdplugin 0 1 7 0 2 5m44s
trivy-system trivy-vuln-polr-replicaset-trivy-operator-polr-adapter-5655fc6ff9 0 1 7 0 2 19m
```
If it helps, I'm running Kubernetes 1.30.0.
The report type of the compliance reports needs to be configured in the trivy-operator installation.
See https://github.com/aquasecurity/trivy-operator/blob/main/deploy/helm/values.yaml#L599
The default is `summary`, which does not provide all the information needed for the mapping.
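For reference, the change looks roughly like this in the trivy-operator Helm values (a sketch based on the linked values.yaml; verify the key against your chart version):

```yaml
# trivy-operator Helm values (sketch; see the values.yaml linked above)
compliance:
  # "summary" (the default) only carries aggregated counts;
  # "all" includes the per-check details the adapter needs for the mapping
  reportType: all
```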
Oh, I see. I must admit I am currently at an early stage with this setup. Anyway, thanks for the hint, that worked. With respect to the Exposed Secrets reports, I suspect nothing is shown because, looking at the details, there is nothing of value in them: all counts (Critical, High, etc.) are 0.
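For reference, I checked those counts like this (a sketch assuming the usual trivy-operator `report.summary` field layout):

```sh
kubectl get exposedsecretreports.aquasecurity.github.io -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CRITICAL:.report.summary.criticalCount,HIGH:.report.summary.highCount,MEDIUM:.report.summary.mediumCount,LOW:.report.summary.lowCount'
```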
That's possible: if a report has no findings, no PolicyReport is created. If you still encounter issues, let me know.
I have the following Trivy reports, for example: [screenshot omitted]
If I look at the cis report, it looks like this: [screenshot omitted]
But
```
k get clusterpolicyreports.wgpolicyk8s.io
```
shows me: [screenshot omitted]
Looking at trivy-compliance-cpolr-cis, for example: [screenshot omitted]
What am I doing wrong?
All versions are at a current state, that is ...