stackrox / kube-linter

KubeLinter is a static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices.
https://docs.kubelinter.io/
Apache License 2.0

[Bug] no pods found matching service labels #669

Open · devShev opened this issue 7 months ago

devShev commented 7 months ago


After scanning the file, the linter reports that it cannot find any pods matching the Service's labels, even though the labels in the Deployment and the Service are identical.

web.yaml

apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  selector:
    app.name: ivea-django
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app.name: ivea-django
  template:
    metadata:
      labels:
        app.name: ivea-django
      annotations:
        lastDeployedDate: "{{ now | unixEpoch }}"
    spec:
      imagePullSecrets:
      - name: gitlab-registry-credentials
      containers:
      - name: main
        imagePullPolicy: Always
        image: {{ .Values.images.admin }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: admin-migrations-job-{{ .Release.Revision }}
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
        - name: gitlab-registry-credentials
      containers:
        - name: main
          image: {{ .Values.images.admin }}
          imagePullPolicy: Always
          command: ["python3", "manage.py", "migrate"]
          envFrom:
            - secretRef:
                name: admin
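For reproduction, the file can be linted directly with the kube-linter CLI. This is a minimal sketch: the exact command used is not shown above, and the warning presumably comes from the built-in dangling-service check.

kube-linter lint web.yaml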
sambonbonne commented 1 month ago

I had the same error; it turned out that my Deployment manifest was invalid (I had strategy set to Recreate directly instead of strategy.type: Recreate).
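For illustration, the difference looks like this (a minimal sketch of the Deployment fragment in question, not taken from my actual chart):

# Invalid: strategy given as a bare string
spec:
  strategy: Recreate

# Valid: strategy is an object with a type field
spec:
  strategy:
    type: Recreate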

It seems your deployment's .spec.template.spec.imagePullSecrets is invalid: it should be an array of strings and not an array of objects.

Unfortunately, this means that an invalid Deployment is simply ignored by kube-linter, so that is an issue in its own right.

Edit: I just wanted to add that I caught this issue with kubeconform.
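For reference, this is roughly the kind of check that surfaces the problem kube-linter skips over. The helm template step and the kubeconform flags below are assumptions about a typical workflow, not something stated above:

helm template . | kubeconform -strict -summary -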