FairwindsOps / goldilocks

Get your resource requests "Just Right"
https://fairwinds.com
Apache License 2.0

The Goldilocks dashboard is not displaying all containers in a namespace #729

Open meadows12 opened 2 weeks ago

meadows12 commented 2 weeks ago

What happened?

I have enabled Goldilocks for all namespaces in the cluster, but I am not receiving recommendations for all containers in the dashboard. Only one container is missing, and I see this error in the dashboard logs: no matching Workloads found for VPA/goldilocks-webhook-portal.

When I checked the VPA Custom Resource Definition (CRD), I found that the VPA for this container has already been created with the appropriate recommendation.

What did you expect to happen?

The Goldilocks dashboard should display all the containers in the namespace.

How can we reproduce this?

Version

9.0.0

Additional context

No response

sudermanjr commented 2 weeks ago

If the VPA is there, it is very unusual for the dashboard not to display it. However, reproducing this will be very difficult without a lot more detail. First step would be to turn up the logging on the dashboard, share those logs, and then share the full YAML of the workload and the VPA that was created.
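Something along these lines should collect what's needed (a sketch; the Deployment name `goldilocks-dashboard`, the `goldilocks` namespace, and the workload kind are assumptions — adjust to your install):

```shell
# Collect the dashboard logs (assumed Deployment name/namespace; adjust to your install)
kubectl -n goldilocks logs deploy/goldilocks-dashboard > dashboard.log

# Dump the VPA that Goldilocks created
kubectl -n webhook get vpa goldilocks-webhook-portal -o yaml > vpa.yaml

# Dump the workload the VPA targets (replace <kind> with the targetRef kind)
kubectl -n webhook get <kind> webhook-portal -o yaml > workload.yaml
```

These commands need access to a live cluster; the verbosity of the dashboard itself is usually raised via its own flags or Helm values rather than kubectl.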

meadows12 commented 2 weeks ago

Hey @sudermanjr, attaching the dashboard logs and the VPA that was created:

E1001 05:55:32.790906       1 summary.go:162] no matching Workloads found for VPA/goldilocks-db-init
E1001 05:55:32.888959       1 summary.go:162] no matching Workloads found for VPA/goldilocks-webhook-portal

And this is the VPA that was created:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  generation: 4
  labels:
    creator: Fairwinds
    source: goldilocks
  managedFields:
    - apiVersion: autoscaling.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:creator: {}
            f:source: {}
        f:spec:
          .: {}
          f:targetRef: {}
          f:updatePolicy:
            .: {}
            f:updateMode: {}
      manager: goldilocks
      operation: Update
    - apiVersion: autoscaling.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
          f:recommendation:
            .: {}
            f:containerRecommendations: {}
      manager: recommender
      operation: Update
      subresource: status
  name: goldilocks-webhook-portal
  namespace: webhook
status:
  conditions:
    - lastTransitionTime: ***
      status: 'True'
      type: RecommendationProvided
  recommendation:
    containerRecommendations:
      - containerName: ***
        lowerBound:
          cpu: 10m
          memory: '52428800'
        target:
          cpu: 11m
          memory: '52428800'
        uncappedTarget:
          cpu: 11m
          memory: '52428800'
        upperBound:
          cpu: 11m
          memory: '52428800'
      - containerName: ***
        lowerBound:
          cpu: 22m
          memory: '716657383'
        target:
          cpu: 23m
          memory: '716711186'
        uncappedTarget:
          cpu: 23m
          memory: '716711186'
        upperBound:
          cpu: 23m
          memory: '743613777'
spec:
  targetRef:
    apiVersion: ***
    kind: ***
    name: webhook-portal
  updatePolicy:
    updateMode: 'Off'

And what do you mean by the YAML of the workload, exactly?

meadows12 commented 1 week ago

@sudermanjr By the way, the targetRef kind here is a custom resource definition (CRD) of our own that creates a custom workload, not a built-in Kubernetes workload like a Deployment, StatefulSet, or DaemonSet. Would that be an issue?
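For comparison, a VPA targeting a built-in controller would normally carry a targetRef like the one below (illustrative fragment; only the custom kind differs from what was posted above):

```yaml
# Typical targetRef for a built-in workload kind
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webhook-portal
```

If the dashboard's workload matching only recognizes built-in kinds, a custom targetRef kind could plausibly explain the "no matching Workloads found" error, but that's for the maintainers to confirm.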