kubecost / features-bugs

A public repository for filing of Kubecost feature requests and bugs. Please read the issue guidelines before filing an issue here.

[Bug] A pricing source is unavailable when everything seems fine #16

Closed bzlom closed 1 month ago

bzlom commented 1 year ago

Kubecost Helm Chart Version

1.106.2

Kubernetes Version

1.27.4

Kubernetes Platform

EKS

Description

We've configured Kubecost with AWS price reconciliation as described here: https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations

After the configuration it looks like everything is working fine (see screenshot), but we are still getting a warning: "A pricing source is unavailable: Savings Plan, Reserved Instance, and Out-Of-Cluster" (see screenshot). Not sure if this is a bug or if we're missing some configuration.

Steps to reproduce

  1. Install Kubecost
  2. Configure price reconciliation: https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations
  3. Configure spot instance pricing: https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations/aws-spot-instances
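
Sketched as shell commands, the steps above look roughly like this (the release name `kubecost`, namespace `monitoring`, and `values.yaml` contents are assumptions taken from the manifests later in this thread; treat this as a sketch, not the canonical install):

```shell
# Add the Kubecost chart repo, then install/upgrade with the values
# file shown later in this thread. That file's kubecostProductConfigs
# block (athena*, awsSpotData*) is what enables price reconciliation
# and spot data feed pricing.
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace monitoring --create-namespace \
  -f values.yaml
```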

Expected behavior

No warnings, or else errors pointing to why something isn't working as intended.

Impact

No response

Screenshots

No response

Logs

No response

Slack discussion

No response

Troubleshooting

chipzoller commented 1 year ago

Can you show us what the resulting cost-analyzer Deployment looks like (please use kubectl get -o yaml for this) after your chart is deployed? And also please confirm the values you used.

bzlom commented 1 year ago

@chipzoller providing the requested details, first the kubectl output:

kubectl get pods -n monitoring kubecost-cost-analyzer-75f4dc69b7-4jm99 -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-10-19T13:10:46Z"
  generateName: kubecost-cost-analyzer-75f4dc69b7-
  labels:
    app: cost-analyzer
    app.kubernetes.io/instance: kubecost
    app.kubernetes.io/name: cost-analyzer
    pod-template-hash: 75f4dc69b7
  name: kubecost-cost-analyzer-75f4dc69b7-4jm99
  namespace: monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: kubecost-cost-analyzer-75f4dc69b7
    uid: f0ee289d-8d5e-40bb-93d6-b1391f915fdd
  resourceVersion: "946710948"
  uid: 6b82ee8f-287c-4653-93da-3c530cf4e262
spec:
  containers:
  - env:
    - name: GRAFANA_ENABLED
      value: "false"
    - name: HELM_VALUES
      value: ..................
    - name: READ_ONLY
      value: "false"
    - name: PROMETHEUS_SERVER_ENDPOINT
      valueFrom:
        configMapKeyRef:
          key: prometheus-server-endpoint
          name: kubecost-cost-analyzer
    - name: CLOUD_PROVIDER_API_KEY
      value: xxxxxxxxxxxxxxxxxxxxxx
    - name: CONFIG_PATH
      value: /var/configs/
    - name: DB_PATH
      value: /var/db/
    - name: CLUSTER_PROFILE
      value: production
    - name: REMOTE_WRITE_PASSWORD
      value: admin
    - name: EMIT_POD_ANNOTATIONS_METRIC
      value: "false"
    - name: EMIT_NAMESPACE_ANNOTATIONS_METRIC
      value: "false"
    - name: EMIT_KSM_V1_METRICS
      value: "true"
    - name: EMIT_KSM_V1_METRICS_ONLY
      value: "false"
    - name: LOG_COLLECTION_ENABLED
      value: "true"
    - name: PRODUCT_ANALYTICS_ENABLED
      value: "false"
    - name: ERROR_REPORTING_ENABLED
      value: "true"
    - name: VALUES_REPORTING_ENABLED
      value: "true"
    - name: SENTRY_DSN
      value: https://xxxxxxxxxxxxxxxxx@o394722.ingest.sentry.io/5245431
    - name: LEGACY_EXTERNAL_API_DISABLED
      value: "false"
    - name: OUT_OF_CLUSTER_PROM_METRICS_ENABLED
      value: "false"
    - name: CACHE_WARMING_ENABLED
      value: "true"
    - name: SAVINGS_ENABLED
      value: "true"
    - name: ETL_ENABLED
      value: "true"
    - name: ETL_STORE_READ_ONLY
      value: "false"
    - name: ETL_CLOUD_USAGE_ENABLED
      value: "false"
    - name: CLOUD_ASSETS_EXCLUDE_PROVIDER_ID
      value: "false"
    - name: ETL_CLOUD_REFRESH_RATE_HOURS
      value: "6"
    - name: ETL_CLOUD_QUERY_WINDOW_DAYS
      value: "7"
    - name: ETL_CLOUD_RUN_WINDOW_DAYS
      value: "3"
    - name: ETL_RESOLUTION_SECONDS
      value: "300"
    - name: ETL_MAX_PROMETHEUS_QUERY_DURATION_MINUTES
      value: "1440"
    - name: ETL_DAILY_STORE_DURATION_DAYS
      value: "91"
    - name: ETL_HOURLY_STORE_DURATION_HOURS
      value: "49"
    - name: ETL_WEEKLY_STORE_DURATION_WEEKS
      value: "53"
    - name: ETL_FILE_STORE_ENABLED
      value: "true"
    - name: ETL_ASSET_RECONCILIATION_ENABLED
      value: "true"
    - name: ETL_USE_UNBLENDED_COST
      value: "false"
    - name: CLOUD_COST_ENABLED
      value: "true"
    - name: CLOUD_COST_IS_INCLUDE_LIST
      value: "false"
    - name: CLOUD_COST_LABEL_LIST
    - name: CLOUD_COST_TOP_N
      value: "1000"
    - name: CONTAINER_STATS_ENABLED
      value: "false"
    - name: RECONCILE_NETWORK
      value: "true"
    - name: KUBECOST_METRICS_POD_ENABLED
      value: "false"
    - name: PV_ENABLED
      value: "true"
    - name: MAX_QUERY_CONCURRENCY
      value: "5"
    - name: UTC_OFFSET
      value: "+00:00"
    - name: CLUSTER_ID
      value: development
    - name: SQL_ADDRESS
      value: pgprometheus
    - name: COST_EVENTS_AUDIT_ENABLED
      value: "false"
    - name: RELEASE_NAME
      value: kubecost
    - name: KUBECOST_NAMESPACE
      value: monitoring
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: KUBECOST_TOKEN
      valueFrom:
        configMapKeyRef:
          key: kubecost-token
          name: kubecost-cost-analyzer
    image: public.ecr.aws/kubecost/cost-model:prod-1.106.2
    imagePullPolicy: Always
    livenessProbe:
      failureThreshold: 200
      httpGet:
        path: /healthz
        port: 9003
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: cost-model
    ports:
    - containerPort: 9003
      name: tcp-model
      protocol: TCP
    - containerPort: 9090
      name: tcp-frontend
      protocol: TCP
    readinessProbe:
      failureThreshold: 200
      httpGet:
        path: /healthz
        port: 9003
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      requests:
        cpu: 200m
        memory: 55Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/configs
      name: persistent-configs
    - mountPath: /var/secrets
      name: service-key-secret
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-mqgqd
      readOnly: true
  - env:
    - name: GET_HOSTS_FROM
      value: dns
    image: public.ecr.aws/kubecost/frontend:prod-1.106.2
    imagePullPolicy: Always
    livenessProbe:
      failureThreshold: 200
      httpGet:
        path: /healthz
        port: 9003
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: cost-analyzer-frontend
    readinessProbe:
      failureThreshold: 200
      httpGet:
        path: /healthz
        port: 9003
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      requests:
        cpu: 10m
        memory: 55Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp
      name: tmp
    - mountPath: /etc/nginx/conf.d/
      name: nginx-conf
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-mqgqd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-5-92-221.eu-west-1.compute.internal
  nodeSelector:
    clientGroup: Shared
    computeGroup: General
    kubernetes.io/arch: arm64
    workerGroup: Infrastructure
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
    runAsGroup: 1001
    runAsUser: 1001
  serviceAccount: kubecost
  serviceAccountName: kubecost
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: arm64
    operator: Equal
    value: "true"
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: tmp
  - configMap:
      defaultMode: 420
      items:
      - key: nginx.conf
        path: default.conf
      name: nginx-conf
    name: nginx-conf
  - name: service-key-secret
    secret:
      defaultMode: 420
      secretName: cloud-service-key
  - name: persistent-configs
    persistentVolumeClaim:
      claimName: kubecost-cost-analyzer
  - name: kube-api-access-mqgqd
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-10-19T13:10:46Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-10-19T13:11:51Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-10-19T13:11:51Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-10-19T13:10:46Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://9f321bdb117550bc74b618cd0975177ea8516cb41edc855c0dc4812192e6dee2
    image: public.ecr.aws/kubecost/frontend:prod-1.106.2
    imageID: public.ecr.aws/kubecost/frontend@sha256:fc51b00cd88ca16e7703dd51c43cf873f0a61c48b15dc10f8c92396d26b3e457
    lastState: {}
    name: cost-analyzer-frontend
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-10-19T13:11:13Z"
  - containerID: containerd://cba62baf7057d8678aa286fcc87491efddf9119b7ffd50c981b411b022c4e644
    image: public.ecr.aws/kubecost/cost-model:prod-1.106.2
    imageID: public.ecr.aws/kubecost/cost-model@sha256:ac82aefeda1abbd6027280b451c7c0e43a8f7a70f30519b57809a6be771f3cad
    lastState: {}
    name: cost-model
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-10-19T13:11:10Z"
  hostIP: 10.5.92.221
  phase: Running
  podIP: 10.5.89.81
  podIPs:
  - ip: 10.5.89.81
  qosClass: Burstable
  startTime: "2023-10-19T13:10:46Z"

kubectl get deployment -n monitoring kubecost-cost-analyzer -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "7"
    kubectl.kubernetes.io/last-applied-configuration: |
      {.....................................
  creationTimestamp: "2023-10-04T11:18:04Z"
  generation: 7
  labels:
    app: cost-analyzer
    app.kubernetes.io/instance: kubecost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cost-analyzer
    argocd.argoproj.io/instance: kubecost
    helm.sh/chart: cost-analyzer-1.106.2
  name: kubecost-cost-analyzer
  namespace: monitoring
  resourceVersion: "946710952"
  uid: ed55dee2-0b1c-419e-ae4e-8602a3bf9cff
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cost-analyzer
      app.kubernetes.io/instance: kubecost
      app.kubernetes.io/name: cost-analyzer
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cost-analyzer
        app.kubernetes.io/instance: kubecost
        app.kubernetes.io/name: cost-analyzer
    spec:
      containers:
      - env:
        - name: GRAFANA_ENABLED
          value: "false"
        - name: HELM_VALUES
          value: ...............................
        - name: READ_ONLY
          value: "false"
        - name: PROMETHEUS_SERVER_ENDPOINT
          valueFrom:
            configMapKeyRef:
              key: prometheus-server-endpoint
              name: kubecost-cost-analyzer
        - name: CLOUD_PROVIDER_API_KEY
          value: xxxxxxxxxxxxxxxxxxx
        - name: CONFIG_PATH
          value: /var/configs/
        - name: DB_PATH
          value: /var/db/
        - name: CLUSTER_PROFILE
          value: production
        - name: REMOTE_WRITE_PASSWORD
          value: admin
        - name: EMIT_POD_ANNOTATIONS_METRIC
          value: "false"
        - name: EMIT_NAMESPACE_ANNOTATIONS_METRIC
          value: "false"
        - name: EMIT_KSM_V1_METRICS
          value: "true"
        - name: EMIT_KSM_V1_METRICS_ONLY
          value: "false"
        - name: LOG_COLLECTION_ENABLED
          value: "true"
        - name: PRODUCT_ANALYTICS_ENABLED
          value: "false"
        - name: ERROR_REPORTING_ENABLED
          value: "true"
        - name: VALUES_REPORTING_ENABLED
          value: "true"
        - name: SENTRY_DSN
          value: https://xxxxxxxxxxxxxxxxxxxxxxxxxxx@o394722.ingest.sentry.io/5245431
        - name: LEGACY_EXTERNAL_API_DISABLED
          value: "false"
        - name: OUT_OF_CLUSTER_PROM_METRICS_ENABLED
          value: "false"
        - name: CACHE_WARMING_ENABLED
          value: "true"
        - name: SAVINGS_ENABLED
          value: "true"
        - name: ETL_ENABLED
          value: "true"
        - name: ETL_STORE_READ_ONLY
          value: "false"
        - name: ETL_CLOUD_USAGE_ENABLED
          value: "false"
        - name: CLOUD_ASSETS_EXCLUDE_PROVIDER_ID
          value: "false"
        - name: ETL_CLOUD_REFRESH_RATE_HOURS
          value: "6"
        - name: ETL_CLOUD_QUERY_WINDOW_DAYS
          value: "7"
        - name: ETL_CLOUD_RUN_WINDOW_DAYS
          value: "3"
        - name: ETL_RESOLUTION_SECONDS
          value: "300"
        - name: ETL_MAX_PROMETHEUS_QUERY_DURATION_MINUTES
          value: "1440"
        - name: ETL_DAILY_STORE_DURATION_DAYS
          value: "91"
        - name: ETL_HOURLY_STORE_DURATION_HOURS
          value: "49"
        - name: ETL_WEEKLY_STORE_DURATION_WEEKS
          value: "53"
        - name: ETL_FILE_STORE_ENABLED
          value: "true"
        - name: ETL_ASSET_RECONCILIATION_ENABLED
          value: "true"
        - name: ETL_USE_UNBLENDED_COST
          value: "false"
        - name: CLOUD_COST_ENABLED
          value: "true"
        - name: CLOUD_COST_IS_INCLUDE_LIST
          value: "false"
        - name: CLOUD_COST_LABEL_LIST
        - name: CLOUD_COST_TOP_N
          value: "1000"
        - name: CONTAINER_STATS_ENABLED
          value: "false"
        - name: RECONCILE_NETWORK
          value: "true"
        - name: KUBECOST_METRICS_POD_ENABLED
          value: "false"
        - name: PV_ENABLED
          value: "true"
        - name: MAX_QUERY_CONCURRENCY
          value: "5"
        - name: UTC_OFFSET
          value: "+00:00"
        - name: CLUSTER_ID
          value: development
        - name: SQL_ADDRESS
          value: pgprometheus
        - name: COST_EVENTS_AUDIT_ENABLED
          value: "false"
        - name: RELEASE_NAME
          value: kubecost
        - name: KUBECOST_NAMESPACE
          value: monitoring
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: KUBECOST_TOKEN
          valueFrom:
            configMapKeyRef:
              key: kubecost-token
              name: kubecost-cost-analyzer
        image: public.ecr.aws/kubecost/cost-model:prod-1.106.2
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 200
          httpGet:
            path: /healthz
            port: 9003
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cost-model
        ports:
        - containerPort: 9003
          name: tcp-model
          protocol: TCP
        - containerPort: 9090
          name: tcp-frontend
          protocol: TCP
        readinessProbe:
          failureThreshold: 200
          httpGet:
            path: /healthz
            port: 9003
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 200m
            memory: 55Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/configs
          name: persistent-configs
        - mountPath: /var/secrets
          name: service-key-secret
      - env:
        - name: GET_HOSTS_FROM
          value: dns
        image: public.ecr.aws/kubecost/frontend:prod-1.106.2
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 200
          httpGet:
            path: /healthz
            port: 9003
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cost-analyzer-frontend
        readinessProbe:
          failureThreshold: 200
          httpGet:
            path: /healthz
            port: 9003
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 10m
            memory: 55Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /etc/nginx/conf.d/
          name: nginx-conf
      dnsPolicy: ClusterFirst
      nodeSelector:
        clientGroup: Shared
        computeGroup: General
        kubernetes.io/arch: arm64
        workerGroup: Infrastructure
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
        runAsGroup: 1001
        runAsUser: 1001
      serviceAccount: kubecost
      serviceAccountName: kubecost
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: arm64
        operator: Equal
        value: "true"
      volumes:
      - emptyDir: {}
        name: tmp
      - configMap:
          defaultMode: 420
          items:
          - key: nginx.conf
            path: default.conf
          name: nginx-conf
        name: nginx-conf
      - name: service-key-secret
        secret:
          defaultMode: 420
          secretName: cloud-service-key
      - name: persistent-configs
        persistentVolumeClaim:
          claimName: kubecost-cost-analyzer
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-10-04T11:18:04Z"
    lastUpdateTime: "2023-10-04T11:18:04Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-10-04T11:18:04Z"
    lastUpdateTime: "2023-10-19T13:11:51Z"
    message: ReplicaSet "kubecost-cost-analyzer-75f4dc69b7" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 7
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Values:

kubecostProductConfigs:
  # AWS spot instance configuration
  awsSpotDataRegion: eu-west-1
  awsSpotDataBucket: spot-datafeed-devtest
  projectID: "123456789" # AWS account ID

  # Kubecost AWS user with IAM permissions
  createServiceKeySecret: true
  awsServiceKeyName: "xxxxxxxxxxxxxxxxxxxxx"
  awsServiceKeyPassword: "yyyyyyyyyyyyyyyyyyyyyyy"

  # Provide CUR config values to kubecost
  athenaProjectID: "123456789"
  athenaBucketName: "s3://aws-athena-query-results-xxxxxxxxxxx"
  athenaRegion: "eu-west-1"
  athenaDatabase: "athenacurcfn_kubecost_report_for_price_reconciliation"
  athenaTable: "kubecost_report_for_price_reconciliation"

global:
  grafana:
    enabled: false
    proxy: false

pricingCsv:
  enabled: false
  location:
    provider: "AWS"
    region: "us-east-1"
    URI: s3://kc-csv-test/pricing_schema.csv # a valid file URI
    csvAccessCredentials: pricing-schema-access-secret

nodeSelector:
  workerGroup: Infrastructure
  kubernetes.io/arch: arm64
  clientGroup: Shared
  computeGroup: General
tolerations:
  - key: "arm64"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

affinity: {}

# If true, creates a PriorityClass to be used by the cost-analyzer pod
priority:
  enabled: false
  # value: 1000000

# If true, enable creation of NetworkPolicy resources.
networkPolicy:
  enabled: false

podSecurityPolicy:
  enabled: false

# Enable this flag if you need to install with specific image tags
# imageVersion: prod-1.97.0

kubecostFrontend:
  image: public.ecr.aws/kubecost/frontend
  imagePullPolicy: Always
  resources:
    requests:
      cpu: "10m"
      memory: "55Mi"
    #limits:
    #  cpu: "100m"
    #  memory: "256Mi"

kubecostModel:
  image: public.ecr.aws/kubecost/cost-model
  imagePullPolicy: Always
  warmCache: true
  warmSavingsCache: true
  etl: true
  # The total number of days the ETL storage will build
  etlStoreDurationDays: 120
  maxQueryConcurrency: 5
  # utcOffset represents a timezone in hours and minutes east (+) or west (-)
  # of UTC, itself, which is defined as +00:00.
  # See the tz database of timezones to look up your local UTC offset:
  # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
  utcOffset: "+00:00"
  resources:
    requests:
      cpu: "200m"
      memory: "55Mi"
    #limits:
    #  cpu: "800m"
    #  memory: "256Mi"

serviceAccount:
  create: true # Set this to false if you're bringing your own service account.
  annotations: {}
  name: kubecost # this is a custom service account created with eksctl specifically for Spot Instance Pricing

# Define persistence volume for cost-analyzer
persistentVolume:
  size: 32Gi
  dbSize: 32.0Gi
  enabled: true # Note that setting this to false means configurations will be wiped out on pod restart.
  # storageClass: "-" #
  # existingClaim: kubecost-cost-analyzer # a claim in the same namespace as kubecost

ingress:
  enabled: true
  # className: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: ["/"] # There's no need to route specifically to the pods-- we have an nginx deployed that handles routing
  hosts:
    - kubecost.mydomain.com
  tls: []
  #  - secretName: cost-analyzer-tls
  #    hosts:
  #      - cost-analyzer.local

service:
  type: ClusterIP
  port: 9090
  targetPort: 9090
  # nodePort:
  labels: {}
  annotations: {}

prometheus:
  server:
    # If clusterIDConfigmap is defined, instead use user-generated configmap with key CLUSTER_ID
    # to use as unique cluster ID in kubecost cost-analyzer deployment.
    # This overrides the cluster_id set in prometheus.server.global.external_labels.
    # NOTE: This does not affect the external_labels set in prometheus config.
    # clusterIDConfigmap: cluster-id-configmap
    image:
      repository: public.ecr.aws/kubecost/prometheus
      tag: v2.35.0
    resources: {}
    # limits:
    #   cpu: 500m
    #   memory: 512Mi
    # requests:
    #   cpu: 500m
    #   memory: 512Mi
    global:
      scrape_interval: 1m
      scrape_timeout: 10s
      evaluation_interval: 1m
      external_labels:
        # Each cluster should have a unique ID
        cluster_id: "development"
    persistentVolume:
      size: 32Gi
      enabled: true
    extraArgs:
      query.max-concurrency: 1
      query.max-samples: 100000000

    nodeSelector:
      workerGroup: Infrastructure
      kubernetes.io/arch: arm64
      clientGroup: Shared
      computeGroup: General
    tolerations:
      - key: "arm64"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"

  configmapReload:
    prometheus:
      ## If false, the configmap-reload container will not be deployed
      ##
      enabled: false

      ## configmap-reload container name
      ##
      name: configmap-reload
      ## configmap-reload container image
      ##
      image:
        repository: public.ecr.aws/bitnami/configmap-reload
        tag: 0.7.1
        pullPolicy: IfNotPresent
      ## Additional configmap-reload container arguments
      ##
      extraArgs: {}
      ## Additional configmap-reload volume directories
      ##
      extraVolumeDirs: []
      ## Additional configmap-reload mounts
      ##
      extraConfigmapMounts: []
        # - name: prometheus-alerts
        #   mountPath: /etc/alerts.d
        #   subPath: ""
        #   configMap: prometheus-alerts
        #   readOnly: true
      ## configmap-reload resource requests and limits
      ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
      ##
      resources: {}

  kube-state-metrics:
    disabled: false
    nodeSelector:
      workerGroup: Infrastructure
      kubernetes.io/arch: arm64
      clientGroup: Shared
      computeGroup: General
    tolerations:
      - key: "arm64"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"

  nodeExporter:
    enabled: false

reporting:
  productAnalytics: false
chipzoller commented 1 year ago

Kindly add this as a code block so we can see (and search for in the future) your output.

bzlom commented 1 year ago

@chipzoller added, please see above

chipzoller commented 1 year ago

@jessegoodier or @thomasvn, can you please help qualify this as either a Helm issue or an app issue?

jessegoodier commented 1 year ago

We'd have to see the logs. This is more of a support issue than a bug or Helm issue. If you look at the cost-model container logs, do you see any errors around pricing?
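
(The logs in question can be pulled with something like the following; the deployment, namespace, and container names are the ones from the manifests pasted above, and the grep filters are only a suggestion for narrowing down pricing-related entries.)

```shell
# Dump the cost-model container logs and keep only error/warning
# lines that mention pricing, assets, or reconciliation.
kubectl logs -n monitoring deploy/kubecost-cost-analyzer -c cost-model \
  | grep -E 'ERR|WRN' \
  | grep -iE 'pricing|asset|reconcil'
```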

bzlom commented 1 year ago

I see these sorts of errors periodically popping up in the cost-model logs:

2023-10-20T13:10:00.001743222Z ERR unable to get most recent valid asset set: could not obtain latest valid asset set
2023-10-20T13:10:00.00175535Z WRN got error could not obtain latest valid asset set for metric localDisks%%Development, not adding to cache
2023-10-20T13:10:00.002745642Z INF ETL: Asset: QueryAsset([2023-10-20T07:10:00+0000, 2023-10-20T13:10:00+0000), []) from StoreDriver[1h] 952.566µs [query 668.735µs] [cloud 26.873µs] [aggregate 256.679µs] [accumulate 148ns] [stop 131ns]
2023-10-20T13:10:00.002811313Z ERR unable to get most recent valid asset set: could not obtain latest valid asset set
2023-10-20T13:10:00.002824386Z WRN got error could not obtain latest valid asset set for metric localDisks%%Production, not adding to cache
2023-10-20T13:10:00.003831838Z INF ETL: Asset: QueryAsset([2023-10-20T07:10:00+0000, 2023-10-20T13:10:00+0000), []) from StoreDriver[1h] 972.517µs [query 711.082µs] [cloud 25.833µs] [aggregate 235.233µs] [accumulate 247ns] [stop 122ns]
2023-10-20T13:10:00.003894782Z ERR unable to get most recent valid asset set: could not obtain latest valid asset set
2023-10-20T13:10:00.003906337Z WRN got error could not obtain latest valid asset set for metric localDisks%%High-Availability, not adding to cache

Other than that, I can't find any errors related to prices. Furthermore, the Kubecost pricing reconciliation works (from my initial tests) and returns the exact data I see in the AWS Cost and Usage Reports (see the Kubecost and AWS screenshots).

This is what led me to believe it might be either a bug, or that I'm missing some other configuration.

thomasvn commented 1 year ago

@bzlom Thanks for digging in here. Based on your configs & screenshots, you should be good to go! This error is safe to ignore and I've filed an internal ticket to resolve this [GTM-141].

For further context, this diagnostic is powered by the backend API /model/pricingSourceStatus. It's likely that the results of this API need to be fixed in the event that a Savings Plan or Reserved Instance doesn't exist.
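
(For anyone hitting the same warning, that diagnostic can be queried directly; a sketch, assuming the service name, namespace, and port from the manifests in this thread.)

```shell
# Forward the cost-analyzer frontend locally, then hit the diagnostic
# API that drives the "pricing source is unavailable" banner.
kubectl port-forward -n monitoring svc/kubecost-cost-analyzer 9090:9090 &
sleep 2
curl -s http://localhost:9090/model/pricingSourceStatus
```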

chipzoller commented 1 year ago

Transferred

chipzoller commented 1 month ago

Hello, in an effort to consolidate our bug and feature request tracking, we are deprecating using GitHub to track tickets. If this issue is still outstanding and you have not done so already, please raise a request at https://support.kubecost.com/.