sathieu opened this issue 1 year ago
@sathieu can you add more info? Have you tried testing that with Trivy?
Here is an example job that does ... the job:
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
  name: scan-node
  namespace: trivy-system
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: trivy-operator
        vulnerabilityReport.scanner: Trivy
    spec:
      automountServiceAccountToken: false
      initContainers:
      - args:
        - --cache-dir
        - /tmp/trivy/.cache
        - image
        - --download-db-only
        - --db-repository
        - gitlab-registry.kube.example.org/external-registries/ghcr.io/aquasecurity/trivy-db
        command:
        - trivy
        env:
        - name: HTTP_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.httpProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: HTTPS_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.httpsProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: NO_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.noProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: GITHUB_TOKEN
          valueFrom:
            secretKeyRef:
              key: trivy.githubToken
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_INSECURE
          value: "true"
        image: ghcr.io/aquasecurity/trivy:0.43.1
        imagePullPolicy: IfNotPresent
        name: download-db
        resources:
          limits:
            cpu: 500m
            memory: 500M
          requests:
            cpu: 100m
            memory: 100M
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      containers:
      - args:
        - -c
        - >-
          trivy rootfs --slow /tmp/rootfs
          --debug
          --cache-dir /tmp/trivy/.cache
          --skip-dirs /var/lib/containerd --skip-dirs /var/lib/kubelet/pods --skip-dirs /var/log --skip-dirs /run/containerd
          --timeout 20m
          --quiet --skip-db-update --format json > /tmp/scan/result.json
          && bzip2 -c /tmp/scan/result.json | base64
        command:
        - /bin/sh
        env:
        - name: TRIVY_SEVERITY
          valueFrom:
            configMapKeyRef:
              key: trivy.severity
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_IGNORE_UNFIXED
          valueFrom:
            configMapKeyRef:
              key: trivy.ignoreUnfixed
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_OFFLINE_SCAN
          valueFrom:
            configMapKeyRef:
              key: trivy.offlineScan
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_JAVA_DB_REPOSITORY
          valueFrom:
            configMapKeyRef:
              key: trivy.javaDbRepository
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_TIMEOUT
          valueFrom:
            configMapKeyRef:
              key: trivy.timeout
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_SKIP_FILES
          valueFrom:
            configMapKeyRef:
              key: trivy.skipFiles
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_SKIP_DIRS
          valueFrom:
            configMapKeyRef:
              key: trivy.skipDirs
              name: trivy-operator-trivy-config
              optional: true
        - name: HTTP_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.httpProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: HTTPS_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.httpsProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: NO_PROXY
          valueFrom:
            configMapKeyRef:
              key: trivy.noProxy
              name: trivy-operator-trivy-config
              optional: true
        - name: TRIVY_INSECURE
          value: "true"
        image: ghcr.io/aquasecurity/trivy:0.43.1
        imagePullPolicy: IfNotPresent
        name: node-scan
        resources:
          limits:
            cpu: 500m
            memory: 500M
          requests:
            cpu: 100m
            memory: 100M
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /tmp/rootfs
          name: rootfs
          readOnly: true
        - mountPath: /tmp/scan
          name: scanresult
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext:
        runAsGroup: 10000
        runAsNonRoot: false
        runAsUser: 10000
      serviceAccount: trivy-operator
      serviceAccountName: trivy-operator
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: tmp
      - emptyDir: {}
        name: scanresult
      - name: rootfs
        hostPath:
          path: /
Basically, it is the same as a vulnerability report scan: trivy rootfs with --skip-dirs (which should be configurable). What is missing in my test is using node affinity and creating one Job per node; see the sketch below. My Job also runs as a non-root user, which should probably be configurable too.
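As a minimal sketch of the per-node part, each generated Job could add the following to the pod template spec above (assuming the operator templates one Job per node; node-1 is a hypothetical node name):

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node-1   # hypothetical node name; one Job would be created per node
      tolerations:
      - operator: Exists   # tolerate all taints so control-plane nodes get scanned too

The pod-level securityContext (runAsUser, runAsNonRoot) in the Job above would likewise be filled in from operator configuration rather than hardcoded.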
@sathieu looks like it could be a nice feature. Would you like to raise a PR?
@chen-keinan I won't have the time to work on this within the next 2-3 months, and I don't promise anything after that!
I'm also interested in vulnerability scans for non-Kubernetes nodes. I want to have CRDs and metrics for all my managed servers (most are k8s). I need to think a bit more about this.
@sathieu thanks for the reply, I'll give you some time to think about it. If you don't think you'll find time to work on it later on, let me know and I'll pick it up.
@chen-keinan Feel free to pick it up now, if you have the time!
This issue is stale because it has been labeled with inactivity.
Not stale, but I have no time to handle it currently
@sathieu any ideas how to disable node scan completely?
disable this flag
This issue is stale because it has been labeled with inactivity.
not stale
This issue is stale because it has been labeled with inactivity.
hopefully not stale
This issue is stale because it has been labeled with inactivity.
We still need this feature.
This issue is stale because it has been labeled with inactivity.
Not stale…
It would be great to scan the whole rootfs of the node (excluding common CRI directories like /var/lib/containerd). This would scan for vulnerabilities in systemd, kubeadm, kubelet, ... and any locally installed binary. Those vulnerabilities would go in a new CRD (NodeVulnerabilityReports?), roughly like the sketch below.
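Purely as an illustration of that idea (nothing here exists yet; the kind and all field values are assumptions modelled on the shape of the existing VulnerabilityReport CRD), such a report could look roughly like:

apiVersion: aquasecurity.github.io/v1alpha1   # assumed group/version, same as the other trivy-operator CRDs
kind: NodeVulnerabilityReport                 # proposed kind, not implemented
metadata:
  name: node-1                                # hypothetical: one report per node
report:
  scanner:
    name: Trivy
    vendor: Aqua Security
  summary:                                    # placeholder counts
    criticalCount: 0
    highCount: 1
    mediumCount: 3
    lowCount: 5
  vulnerabilities:
  - vulnerabilityID: CVE-2023-XXXXX           # placeholder entry
    resource: kubelet
    installedVersion: 1.27.x
    severity: HIGH

The report subtree deliberately mirrors the existing vulnerability reports, so the same metrics and export tooling could be reused for nodes.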