Closed: rukender closed this issue 19 hours ago.
@rukender are you using helm install ... to deploy?
Yes, we are using the Helm chart, deployed into our k8s cluster via ArgoCD.
Can you please do the following:
helm uninstall trivy-operator -n trivy-system
kubectl delete crd vulnerabilityreports.aquasecurity.github.io
kubectl delete crd exposedsecretreports.aquasecurity.github.io
kubectl delete crd configauditreports.aquasecurity.github.io
kubectl delete crd clusterconfigauditreports.aquasecurity.github.io
kubectl delete crd rbacassessmentreports.aquasecurity.github.io
kubectl delete crd infraassessmentreports.aquasecurity.github.io
kubectl delete crd clusterrbacassessmentreports.aquasecurity.github.io
kubectl delete crd clustercompliancereports.aquasecurity.github.io
kubectl delete crd clusterinfraassessmentreports.aquasecurity.github.io
kubectl delete crd sbomreports.aquasecurity.github.io
kubectl delete crd clustersbomreports.aquasecurity.github.io
kubectl delete crd clustervulnerabilityreports.aquasecurity.github.io
helm install...
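If helpful, the CRD list above can be deleted in one pass instead of one command per CRD. A minimal sketch, assuming kubectl is on PATH and pointed at the affected cluster (the grep/xargs pipeline is the only logic here):

```shell
#!/bin/sh
# Collect every aquasecurity.github.io CRD name; suppress errors so the
# script degrades gracefully when no cluster is reachable.
crds=$(kubectl get crds -o name 2>/dev/null | grep 'aquasecurity.github.io' || true)
if [ -n "$crds" ]; then
  # Delete them all in one xargs invocation.
  echo "$crds" | xargs kubectl delete
else
  echo "no aquasecurity CRDs found (or no cluster reachable)"
fi
```

This matches the one-liner shown later in the thread (kubectl get crds | grep aqua | awk | xargs), just with the name-only output form.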
@rukender any update on this issue?
@chen-keinan sorry for the delay; here is the update:
# kubectl get crds | grep aqua | awk '{print $1}' | xargs kubectl delete crd
customresourcedefinition.apiextensions.k8s.io "clustercompliancereports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterconfigauditreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterinfraassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterrbacassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "clustersbomreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "configauditreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "exposedsecretreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "infraassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "rbacassessmentreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "sbomreports.aquasecurity.github.io" deleted
customresourcedefinition.apiextensions.k8s.io "vulnerabilityreports.aquasecurity.github.io" deleted
# kubectl get pods -n trivy
NAME READY STATUS RESTARTS AGE
node-collector-7d4f56f9fc-dp2sh 0/1 Completed 0 10m
trivy-operator-trivy-operator-shared-75c77ck6dhg 1/1 Running 0 10m
# kubectl get vulnerabilityreports.aquasecurity.github.io -o wide
No resources found in default namespace.
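Worth noting: VulnerabilityReports are namespaced objects, so "No resources found in default namespace" only rules out the default namespace. A quick sketch to list them cluster-wide instead (assumes kubectl is configured for the cluster; falls back to a message if not):

```shell
#!/bin/sh
# -A lists the namespaced VulnerabilityReport objects across all namespaces,
# not just the current context's default namespace.
kubectl get vulnerabilityreports.aquasecurity.github.io -A -o wide 2>/dev/null \
  || echo "could not query cluster"
```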
I can still see the same issue after deleting the CRDs. Also, looking in the Argo UI, Trivy is only generating configauditreports.
I'm seeing these error messages for different pods:
{"level":"error","ts":"2024-04-16T14:14:55Z","msg":"Reconciler error","controller":"resourcequota","controllerGroup":"","controllerKind":"ResourceQuota","ResourceQuota":{"name":"example-cluster","namespace":"example-cluster"},"namespace":"example-cluster","name":"example-cluster","reconcileID":"ba815386-1cd7-400d-a8a9-fde4c53ee8dc","error":"the server could not find the requested resource (post configauditreports.aquasecurity.github.io)","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"}
More details:
W0416 14:08:19.293929 1 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1alpha1.VulnerabilityReport: the server could not find the requested resource (get vulnerabilityreports.aquasecurity.github.io)
E0416 14:08:19.293956 1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: Failed to watch *v1alpha1.VulnerabilityReport: failed to list *v1alpha1.VulnerabilityReport: the server could not find the requested resource (get vulnerabilityreports.aquasecurity.github.io)
W0416 14:08:19.294003 1 reflector.go:535] pkg/mod/k8s.io/client-go@v0.28.4/tools/cache/reflector.go:229: failed to list *v1alpha1.ClusterConfigAuditReport: the server could not find the requested resource (get clusterconfigauditreports.aquasecurity.github.io)
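For context on these errors: "the server could not find the requested resource" on a custom type usually means the CRD is absent or not yet Established on the API server. A quick check, sketched under the assumption that kubectl is configured for the affected cluster:

```shell
#!/bin/sh
# If the CRD exists, print its Established condition ("True" once the API
# server is serving the type); otherwise report that it is missing.
crd=vulnerabilityreports.aquasecurity.github.io
if kubectl get crd "$crd" >/dev/null 2>&1; then
  kubectl get crd "$crd" \
    -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
else
  echo "CRD $crd not found"
fi
```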
@rukender do you mind doing a simple test with a local kind cluster, deploying trivy-operator to it with default settings via helm install:
helm install trivy-operator aqua/trivy-operator \
  --namespace trivy-system \
  --create-namespace \
  --version 0.21.4
just to make sure you are able to get vulnerabilities? That would confirm whether it is environment related, so we can consider other directions.
This is the ReplicaSet for trivy-operator:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '1'
    deployment.kubernetes.io/revision: '1'
  creationTimestamp: '2024-04-16T14:08:02Z'
  generation: 1
  labels:
    app.kubernetes.io/instance: trivy-operator
    app.kubernetes.io/name: trivy-operator-shared
    pod-template-hash: 75c77c8cd9
  name: trivy-operator-trivy-operator-shared-75c77c8cd9
  namespace: trivy
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: trivy-operator-trivy-operator-shared
      uid: 6ff98613-3352-4484-b7a8-1203177faa68
  resourceVersion: '660578862'
  uid: ce89ad8c-1dfc-42da-be75-dab9c8f31481
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: trivy-operator
      app.kubernetes.io/name: trivy-operator-shared
      pod-template-hash: 75c77c8cd9
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: trivy-operator
        app.kubernetes.io/name: trivy-operator-shared
        pod-template-hash: 75c77c8cd9
    spec:
      automountServiceAccountToken: true
      containers:
        - env:
            - name: OPERATOR_NAMESPACE
              value: trivy
            - name: OPERATOR_TARGET_NAMESPACES
            - name: OPERATOR_EXCLUDE_NAMESPACES
            - name: OPERATOR_TARGET_WORKLOADS
              value: >-
                pod,replicaset,replicationcontroller,statefulset,daemonset,cronjob,job
            - name: OPERATOR_SERVICE_ACCOUNT
              value: trivy-operator-trivy-operator-shared
            - name: OPERATOR_LOG_DEV_MODE
              value: 'false'
            - name: OPERATOR_SCAN_JOB_TTL
            - name: OPERATOR_SCAN_JOB_TIMEOUT
              value: 5m
            - name: OPERATOR_CONCURRENT_SCAN_JOBS_LIMIT
              value: '9'
            - name: OPERATOR_CONCURRENT_NODE_COLLECTOR_LIMIT
              value: '1'
            - name: OPERATOR_SCAN_JOB_RETRY_AFTER
              value: 30s
            - name: OPERATOR_BATCH_DELETE_LIMIT
              value: '10'
            - name: OPERATOR_BATCH_DELETE_DELAY
              value: 10s
            - name: OPERATOR_METRICS_BIND_ADDRESS
              value: ':8080'
            - name: OPERATOR_METRICS_FINDINGS_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_VULN_ID_ENABLED
              value: 'false'
            - name: OPERATOR_HEALTH_PROBE_BIND_ADDRESS
              value: ':9090'
            - name: OPERATOR_VULNERABILITY_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_SBOM_GENERATION_ENABLED
              value: 'true'
            - name: OPERATOR_VULNERABILITY_SCANNER_SCAN_ONLY_CURRENT_REVISIONS
              value: 'true'
            - name: OPERATOR_SCANNER_REPORT_TTL
              value: 24h
            - name: OPERATOR_CACHE_REPORT_TTL
              value: 120h
            - name: CONTROLLER_CACHE_SYNC_TIMEOUT
              value: 5m
            - name: OPERATOR_CONFIG_AUDIT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_RBAC_ASSESSMENT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_INFRA_ASSESSMENT_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_CONFIG_AUDIT_SCANNER_SCAN_ONLY_CURRENT_REVISIONS
              value: 'true'
            - name: OPERATOR_EXPOSED_SECRET_SCANNER_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_EXPOSED_SECRET_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_CONFIG_AUDIT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_RBAC_ASSESSMENT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_INFRA_ASSESSMENT_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_METRICS_IMAGE_INFO_ENABLED
              value: 'true'
            - name: OPERATOR_METRICS_CLUSTER_COMPLIANCE_INFO_ENABLED
              value: 'false'
            - name: OPERATOR_WEBHOOK_BROADCAST_URL
            - name: OPERATOR_WEBHOOK_BROADCAST_TIMEOUT
              value: 30s
            - name: OPERATOR_SEND_DELETED_REPORTS
              value: 'false'
            - name: OPERATOR_PRIVATE_REGISTRY_SCAN_SECRETS_NAMES
              value: '{}'
            - name: OPERATOR_ACCESS_GLOBAL_SECRETS_SERVICE_ACCOUNTS
              value: 'true'
            - name: OPERATOR_BUILT_IN_TRIVY_SERVER
              value: 'false'
            - name: TRIVY_SERVER_HEALTH_CHECK_CACHE_EXPIRATION
              value: 10h
            - name: OPERATOR_MERGE_RBAC_FINDING_WITH_CONFIG_AUDIT
              value: 'false'
            - name: OPERATOR_CLUSTER_COMPLIANCE_ENABLED
              value: 'true'
          image: 'registry.example.com:5000/aquasecurity/trivy-operator:0.17.1'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 10
            httpGet:
              path: /healthz/
              port: probes
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: trivy-operator-shared
          ports:
            - containerPort: 8080
              name: metrics
              protocol: TCP
            - containerPort: 9090
              name: probes
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz/
              port: probes
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: '5'
              memory: 10Gi
            requests:
              cpu: '5'
              memory: 10Gi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 2000
        runAsGroup: 2000
        runAsUser: 2000
        supplementalGroups:
          - 2000
      serviceAccount: trivy-operator-trivy-operator-shared
      serviceAccountName: trivy-operator-trivy-operator-shared
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
@rukender have you tested trivy-operator on a local kind cluster?
@chen-keinan I ran it locally and it is working fine.
% kubectl get vulnerabilityreports --all-namespaces -o wide
NAMESPACE NAME REPOSITORY TAG SCANNER AGE CRITICAL HIGH MEDIUM LOW UNKNOWN
kube-system daemonset-kindnet-kindnet-cni kindest/kindnetd v20240202-8f1494ea Trivy 87s 0 4 19 24 0
kube-system daemonset-kube-proxy-kube-proxy kube-proxy v1.29.2 Trivy 81s 0 2 6 17 0
kube-system pod-8b4f55974 kube-controller-manager v1.29.2 Trivy 78s 0 2 2 0 0
kube-system pod-etcd-kind-control-plane-etcd etcd 3.5.10-0 Trivy 83s 0 4 8 0 0
kube-system pod-kube-apiserver-kind-control-plane-kube-apiserver kube-apiserver v1.29.2 Trivy 88s 0 1 2 0 0
kube-system pod-kube-scheduler-kind-control-plane-kube-scheduler kube-scheduler v1.29.2 Trivy 87s 0 1 2 0 0
kube-system replicaset-coredns-76f75df574-coredns coredns/coredns v1.11.1 Trivy 83s 0 3 5 0 0
local-path-storage replicaset-5cbdfd7595 kindest/local-path-provisioner v20240202-8f1494ea Trivy 77s 0 2 11 13 0
trivy-system replicaset-trivy-operator-84b86599cb-trivy-operator aquasecurity/trivy-operator 0.19.4 Trivy 78s 0 0 1 2 0
Does this mean there is an issue with the Helm chart in my cluster, or is there some other possible cause you can think of?
@rukender I suspect it is related to the cluster environment or config. Have you tried running the default helm install ... on your cluster, or do you use different settings?
I use the default settings.
This issue is stale because it has been labeled with inactivity.
I'm not able to see the vulnerability reports in my other cluster, but I can see all the other reports. How can I fix this?
I'm using Trivy Operator version 0.19.1.
Here is the trivy-operator ConfigMap:
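One thing worth double-checking: the ReplicaSet earlier in the thread runs image tag 0.17.1, so the version actually deployed may differ from the one mentioned here, and a chart/image mismatch can leave the operator expecting CRD versions that were never installed. A sketch to see what is really running (assumes kubectl access and the trivy namespace from the ReplicaSet above):

```shell
#!/bin/sh
# Print each Deployment name alongside its container image tags; the fallback
# keeps the sketch harmless when no cluster is reachable.
kubectl get deploy -n trivy \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}' \
  2>/dev/null || echo "could not query cluster"
```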